InferX — Serverless GPU Inference Platform for Production Workloads

Funcpod

Tenant:     public
Namespace:  Trial
Podname:    public/Trial/translategemma-27b-it-FP8-Dynamic/112/138
Model:      translategemma-27b-it-FP8-Dynamic

State

State          Time
Init           2026-03-01 23:30:41
PullingImage   2026-03-01 23:30:41
Creating       2026-03-01 23:30:41
Restoring      2026-03-01 23:30:43
Standby        2026-03-01 23:30:43
Resuming       2026-03-01 23:35:17
Ready          2026-03-01 23:35:20
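The lifecycle above can be modeled as a simple state sequence. A minimal sketch follows; the state names mirror the dashboard's State table, but the transition order is inferred from the timestamps and is an assumption, not documented platform behavior:

```python
from enum import Enum


class FuncpodState(Enum):
    """Funcpod lifecycle states as shown in the State table above."""
    INIT = "Init"
    PULLING_IMAGE = "PullingImage"
    CREATING = "Creating"
    RESTORING = "Restoring"
    STANDBY = "Standby"
    RESUMING = "Resuming"
    READY = "Ready"


# Cold-start order inferred from the timestamps: the pod restores into
# Standby, then resumes to Ready on first traffic (assumed, see lead-in).
COLD_START_ORDER = [
    FuncpodState.INIT,
    FuncpodState.PULLING_IMAGE,
    FuncpodState.CREATING,
    FuncpodState.RESTORING,
    FuncpodState.STANDBY,
    FuncpodState.RESUMING,
    FuncpodState.READY,
]
```

Note the gap in the table between Standby (23:30:43) and Resuming (23:35:17): the pod sat in a restored snapshot for several minutes, then went from Resuming to Ready in about three seconds when the first request arrived.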

Log

INFO 03-01 23:35:20 [logger.py:42] Received request cmpl-d02ee846bf58e936eb2d07297376a1b9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=1000, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.4:123 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:35:20 [async_llm.py:261] Added request cmpl-d02ee846bf58e936eb2d07297376a1b9-0.
INFO 03-01 23:35:21 [logger.py:42] Received request cmpl-812a0317448e4893bf9046b507ad4020-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:35:21 [async_llm.py:261] Added request cmpl-812a0317448e4893bf9046b507ad4020-0.
INFO 03-01 23:35:22 [logger.py:42] Received request cmpl-cbf0f04c013445e18eb44cf0b1e8a57f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:35:22 [async_llm.py:261] Added request cmpl-cbf0f04c013445e18eb44cf0b1e8a57f-0.
INFO 03-01 23:35:24 [logger.py:42] Received request cmpl-09fb7f9ca36341c5805ab8d1a94cad7f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:35:24 [async_llm.py:261] Added request cmpl-09fb7f9ca36341c5805ab8d1a94cad7f-0.
INFO 03-01 23:35:25 [logger.py:42] Received request cmpl-ed50d99b3511441d91775a89dc07e6b5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:35:25 [async_llm.py:261] Added request cmpl-ed50d99b3511441d91775a89dc07e6b5-0.
INFO 03-01 23:35:26 [loggers.py:116] Engine 000: Avg prompt throughput: 0.1 tokens/s, Avg generation throughput: 0.8 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 13.8%, Prefix cache hit rate: 0.0%
INFO 03-01 23:35:26 [logger.py:42] Received request cmpl-eac1adb5faf640e5a813d2ca64ba8593-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:35:26 [async_llm.py:261] Added request cmpl-eac1adb5faf640e5a813d2ca64ba8593-0.
INFO 03-01 23:35:27 [logger.py:42] Received request cmpl-7aed724a4fd34757b65a2dcfd8997c39-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:35:27 [async_llm.py:261] Added request cmpl-7aed724a4fd34757b65a2dcfd8997c39-0.
INFO 03-01 23:35:28 [logger.py:42] Received request cmpl-6babf6fd08bd44c282b727b86ed97397-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:35:28 [async_llm.py:261] Added request cmpl-6babf6fd08bd44c282b727b86ed97397-0.
INFO 03-01 23:35:29 [logger.py:42] Received request cmpl-80e3048984f74a8496297c26e22b5c69-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:35:29 [async_llm.py:261] Added request cmpl-80e3048984f74a8496297c26e22b5c69-0.
INFO 03-01 23:35:30 [logger.py:42] Received request cmpl-90f24e879ea84998b5ffc48b21aa490f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:35:30 [async_llm.py:261] Added request cmpl-90f24e879ea84998b5ffc48b21aa490f-0.
INFO 03-01 23:35:31 [logger.py:42] Received request cmpl-71850534b81b4fe782c014f5f53c0150-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:35:31 [async_llm.py:261] Added request cmpl-71850534b81b4fe782c014f5f53c0150-0.
INFO 03-01 23:35:32 [logger.py:42] Received request cmpl-045e74b687bc4998a4d0deebb1df2bae-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:35:32 [async_llm.py:261] Added request cmpl-045e74b687bc4998a4d0deebb1df2bae-0.
INFO 03-01 23:35:33 [logger.py:42] Received request cmpl-53e3900d91a7423396fc7674857f7ee1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:35:33 [async_llm.py:261] Added request cmpl-53e3900d91a7423396fc7674857f7ee1-0.
INFO 03-01 23:35:35 [logger.py:42] Received request cmpl-03c2aee705034444af1898569b3cc6c2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:35:35 [async_llm.py:261] Added request cmpl-03c2aee705034444af1898569b3cc6c2-0.
INFO 03-01 23:35:36 [logger.py:42] Received request cmpl-2304baae5c814e7b9c9c4b15203f2ffc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:35:36 [async_llm.py:261] Added request cmpl-2304baae5c814e7b9c9c4b15203f2ffc-0.
INFO 03-01 23:35:36 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 68.5 tokens/s, Running: 2 reqs, Waiting: 0 reqs, GPU KV cache usage: 40.8%, Prefix cache hit rate: 0.0%
INFO 03-01 23:35:37 [logger.py:42] Received request cmpl-fbfde6fefa13431da303cb1971353cc1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:35:37 [async_llm.py:261] Added request cmpl-fbfde6fefa13431da303cb1971353cc1-0.
INFO 03-01 23:35:38 [logger.py:42] Received request cmpl-109d3b213578492385de4368b1622d25-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:35:38 [async_llm.py:261] Added request cmpl-109d3b213578492385de4368b1622d25-0.
INFO 03-01 23:35:39 [logger.py:42] Received request cmpl-645a36dfc6f44c37b30d956b1dff29b9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:35:39 [async_llm.py:261] Added request cmpl-645a36dfc6f44c37b30d956b1dff29b9-0.
INFO 03-01 23:35:40 [logger.py:42] Received request cmpl-4b9d436212034a3ca7d69bbac74908de-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:35:40 [async_llm.py:261] Added request cmpl-4b9d436212034a3ca7d69bbac74908de-0.
INFO 03-01 23:35:41 [logger.py:42] Received request cmpl-775cb74cd87d4c028aa4b56c8ab9a141-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:35:41 [async_llm.py:261] Added request cmpl-775cb74cd87d4c028aa4b56c8ab9a141-0.
INFO 03-01 23:35:42 [logger.py:42] Received request cmpl-24904afe9379439db834061c13649c6c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:35:42 [async_llm.py:261] Added request cmpl-24904afe9379439db834061c13649c6c-0.
INFO 03-01 23:35:43 [logger.py:42] Received request cmpl-ad93d0e6f165458d93801789665684a4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:35:43 [async_llm.py:261] Added request cmpl-ad93d0e6f165458d93801789665684a4-0.
INFO 03-01 23:35:44 [logger.py:42] Received request cmpl-86c780ae83124bafbb029b08c4c3f3a7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:35:44 [async_llm.py:261] Added request cmpl-86c780ae83124bafbb029b08c4c3f3a7-0.
INFO 03-01 23:35:45 [logger.py:42] Received request cmpl-0455e7644c294b348b6edc3d5c4dad5d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:35:45 [async_llm.py:261] Added request cmpl-0455e7644c294b348b6edc3d5c4dad5d-0.
INFO 03-01 23:35:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 9.8 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:35:46 [logger.py:42] Received request cmpl-e8009490d7e04437b793a65678a753ad-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:35:46 [async_llm.py:261] Added request cmpl-e8009490d7e04437b793a65678a753ad-0.
INFO 03-01 23:35:48 [logger.py:42] Received request cmpl-bd878efdaa2240dbbbcded85cfb0bce2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:35:48 [async_llm.py:261] Added request cmpl-bd878efdaa2240dbbbcded85cfb0bce2-0.
INFO 03-01 23:35:49 [logger.py:42] Received request cmpl-f57b8af0c7354826b9713a796464e75d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:35:49 [async_llm.py:261] Added request cmpl-f57b8af0c7354826b9713a796464e75d-0.
INFO 03-01 23:35:50 [logger.py:42] Received request cmpl-a387c217453b4271a072313ad29352da-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:35:50 [async_llm.py:261] Added request cmpl-a387c217453b4271a072313ad29352da-0.
INFO 03-01 23:35:51 [logger.py:42] Received request cmpl-12783b43e25b41db9408209c9605d597-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:35:51 [async_llm.py:261] Added request cmpl-12783b43e25b41db9408209c9605d597-0.
INFO 03-01 23:35:52 [logger.py:42] Received request cmpl-0004d81a520346afb2843828761e77c1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:35:52 [async_llm.py:261] Added request cmpl-0004d81a520346afb2843828761e77c1-0.
INFO 03-01 23:35:53 [logger.py:42] Received request cmpl-b1e3a66f6bca4d0ea9d496beead7b3ee-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:35:53 [async_llm.py:261] Added request cmpl-b1e3a66f6bca4d0ea9d496beead7b3ee-0.
INFO 03-01 23:35:54 [logger.py:42] Received request cmpl-9e14ea6b77b7496086f3841738ed6950-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:35:54 [async_llm.py:261] Added request cmpl-9e14ea6b77b7496086f3841738ed6950-0.
INFO 03-01 23:35:55 [logger.py:42] Received request cmpl-49594465572b49d2b054f81899fedd25-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:35:55 [async_llm.py:261] Added request cmpl-49594465572b49d2b054f81899fedd25-0.
INFO 03-01 23:35:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:35:56 [logger.py:42] Received request cmpl-2174a73f94de401589db03aed86e6254-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:35:56 [async_llm.py:261] Added request cmpl-2174a73f94de401589db03aed86e6254-0.
INFO 03-01 23:35:57 [logger.py:42] Received request cmpl-d85662511d4c4c9d80b10f15837f42d0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:35:57 [async_llm.py:261] Added request cmpl-d85662511d4c4c9d80b10f15837f42d0-0.
INFO 03-01 23:35:58 [logger.py:42] Received request cmpl-c6ef8cdb9b394d928be882514f1ea171-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:35:58 [async_llm.py:261] Added request cmpl-c6ef8cdb9b394d928be882514f1ea171-0.
INFO 03-01 23:35:59 [logger.py:42] Received request cmpl-5aa4c1075443425aa6b79c19a756d280-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:35:59 [async_llm.py:261] Added request cmpl-5aa4c1075443425aa6b79c19a756d280-0.
INFO 03-01 23:36:01 [logger.py:42] Received request cmpl-206e1b3f55a74113a506440d4e14573b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:36:01 [async_llm.py:261] Added request cmpl-206e1b3f55a74113a506440d4e14573b-0.
INFO 03-01 23:36:02 [logger.py:42] Received request cmpl-9424609892ef4e34b3f787b86a80dd83-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:36:02 [async_llm.py:261] Added request cmpl-9424609892ef4e34b3f787b86a80dd83-0.
INFO 03-01 23:36:03 [logger.py:42] Received request cmpl-446156d4ec034525a143b08ce235f0b1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:36:03 [async_llm.py:261] Added request cmpl-446156d4ec034525a143b08ce235f0b1-0.
INFO 03-01 23:36:04 [logger.py:42] Received request cmpl-86fba434294b49f4ae608f6a31ffb355-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:36:04 [async_llm.py:261] Added request cmpl-86fba434294b49f4ae608f6a31ffb355-0.
INFO 03-01 23:36:05 [logger.py:42] Received request cmpl-f95db5bd7f53420aa08f4cc34ff6ef5f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:36:05 [async_llm.py:261] Added request cmpl-f95db5bd7f53420aa08f4cc34ff6ef5f-0.
INFO 03-01 23:36:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:36:06 [logger.py:42] Received request cmpl-0b9f07ee2300440f9090f2577bfd60c3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:36:06 [async_llm.py:261] Added request cmpl-0b9f07ee2300440f9090f2577bfd60c3-0.
INFO 03-01 23:36:07 [logger.py:42] Received request cmpl-aba33f9867614dee81d5a283cbb39e6f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:36:07 [async_llm.py:261] Added request cmpl-aba33f9867614dee81d5a283cbb39e6f-0.
INFO 03-01 23:36:08 [logger.py:42] Received request cmpl-101abf9df1954c86abf780faf978402a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:36:08 [async_llm.py:261] Added request cmpl-101abf9df1954c86abf780faf978402a-0.
INFO 03-01 23:36:09 [logger.py:42] Received request cmpl-79552c12e7ee46739a2316d43438fada-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:36:09 [async_llm.py:261] Added request cmpl-79552c12e7ee46739a2316d43438fada-0.
INFO 03-01 23:36:10 [logger.py:42] Received request cmpl-1eed3a369e694962b523a569cc3a3ff0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:36:10 [async_llm.py:261] Added request cmpl-1eed3a369e694962b523a569cc3a3ff0-0.
INFO 03-01 23:36:11 [logger.py:42] Received request cmpl-6bdd254844ec4fceba980f80ffbb6e16-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:36:11 [async_llm.py:261] Added request cmpl-6bdd254844ec4fceba980f80ffbb6e16-0.
INFO 03-01 23:36:12 [logger.py:42] Received request cmpl-e158bcfd01ee493eb352191a1879b6df-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:36:12 [async_llm.py:261] Added request cmpl-e158bcfd01ee493eb352191a1879b6df-0.
INFO 03-01 23:36:14 [logger.py:42] Received request cmpl-553e150d89fc4b84b07725a883c87acc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:36:14 [async_llm.py:261] Added request cmpl-553e150d89fc4b84b07725a883c87acc-0.
INFO 03-01 23:36:15 [logger.py:42] Received request cmpl-cd9b5f9ecf78429586fb97c00580abeb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:36:15 [async_llm.py:261] Added request cmpl-cd9b5f9ecf78429586fb97c00580abeb-0.
INFO 03-01 23:36:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:36:16 [logger.py:42] Received request cmpl-34acbdb759214fc190ad26e98bcd5b8b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:36:16 [async_llm.py:261] Added request cmpl-34acbdb759214fc190ad26e98bcd5b8b-0.
INFO 03-01 23:36:17 [logger.py:42] Received request cmpl-7b652f0ec56d48b39cd40d140a294c8a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:36:17 [async_llm.py:261] Added request cmpl-7b652f0ec56d48b39cd40d140a294c8a-0.
INFO 03-01 23:36:18 [logger.py:42] Received request cmpl-b076509ceca242e1877a50628b439f07-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:36:18 [async_llm.py:261] Added request cmpl-b076509ceca242e1877a50628b439f07-0.
INFO 03-01 23:36:19 [logger.py:42] Received request cmpl-8e8067a2cf8c49e4a30364729e5dd976-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:36:19 [async_llm.py:261] Added request cmpl-8e8067a2cf8c49e4a30364729e5dd976-0.
INFO 03-01 23:36:20 [logger.py:42] Received request cmpl-ebbeaed06b7a490b8566c5342a7d8d78-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:36:20 [async_llm.py:261] Added request cmpl-ebbeaed06b7a490b8566c5342a7d8d78-0.
INFO 03-01 23:36:21 [logger.py:42] Received request cmpl-3bf9cc75cf2e4a628558dac812f2a649-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:36:21 [async_llm.py:261] Added request cmpl-3bf9cc75cf2e4a628558dac812f2a649-0.
INFO 03-01 23:36:22 [logger.py:42] Received request cmpl-e18950206ca04398a1eb7389ab08cafe-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:36:22 [async_llm.py:261] Added request cmpl-e18950206ca04398a1eb7389ab08cafe-0.
INFO 03-01 23:36:23 [logger.py:42] Received request cmpl-c28777b7899e456b838c864fc4c96dd9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:36:23 [async_llm.py:261] Added request cmpl-c28777b7899e456b838c864fc4c96dd9-0.
INFO 03-01 23:36:24 [logger.py:42] Received request cmpl-8cb191b517a347078c090303eb3073cd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:36:24 [async_llm.py:261] Added request cmpl-8cb191b517a347078c090303eb3073cd-0.
INFO 03-01 23:36:26 [logger.py:42] Received request cmpl-78422faf379a4bde9003fb9ed014d827-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:36:26 [async_llm.py:261] Added request cmpl-78422faf379a4bde9003fb9ed014d827-0.
INFO 03-01 23:36:26 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:36:27 [logger.py:42] Received request cmpl-87b49ac2b5c4480aa698967ead06d99b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:36:27 [async_llm.py:261] Added request cmpl-87b49ac2b5c4480aa698967ead06d99b-0.
INFO 03-01 23:36:28 [logger.py:42] Received request cmpl-4b8e0aa35d7d42d5ade63b084d754c94-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:36:28 [async_llm.py:261] Added request cmpl-4b8e0aa35d7d42d5ade63b084d754c94-0.
INFO 03-01 23:36:29 [logger.py:42] Received request cmpl-dec64fc616584076b3fb9616d1584dd5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:36:29 [async_llm.py:261] Added request cmpl-dec64fc616584076b3fb9616d1584dd5-0.
INFO 03-01 23:36:30 [logger.py:42] Received request cmpl-22cdce07c9b84ce3a344440d46cd1377-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:36:30 [async_llm.py:261] Added request cmpl-22cdce07c9b84ce3a344440d46cd1377-0.
INFO 03-01 23:36:31 [logger.py:42] Received request cmpl-036da8d768c94408a83824b790f78965-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:36:31 [async_llm.py:261] Added request cmpl-036da8d768c94408a83824b790f78965-0.
INFO 03-01 23:36:32 [logger.py:42] Received request cmpl-9d017b51f09c472580e40a8355b50edf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:36:32 [async_llm.py:261] Added request cmpl-9d017b51f09c472580e40a8355b50edf-0.
INFO 03-01 23:36:33 [logger.py:42] Received request cmpl-6f46b5afb7d34245837282b639424041-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:36:33 [async_llm.py:261] Added request cmpl-6f46b5afb7d34245837282b639424041-0.
INFO 03-01 23:36:34 [logger.py:42] Received request cmpl-a3c063bf63214a98b4a495854389edf7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:36:34 [async_llm.py:261] Added request cmpl-a3c063bf63214a98b4a495854389edf7-0.
INFO 03-01 23:36:35 [logger.py:42] Received request cmpl-66b448c296fa40a78aca0fc4defd0247-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:36:35 [async_llm.py:261] Added request cmpl-66b448c296fa40a78aca0fc4defd0247-0.
INFO 03-01 23:36:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[... nine identical 'Received request' / '200 OK' / 'Added request' triplets, 23:36:36 to 23:36:45, elided ...]
INFO 03-01 23:36:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[... nine identical 'Received request' / '200 OK' / 'Added request' triplets, 23:36:46 to 23:36:55, elided ...]
INFO 03-01 23:36:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[... nine identical 'Received request' / '200 OK' / 'Added request' triplets, 23:36:56 to 23:37:05, elided ...]
INFO 03-01 23:37:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[... ten identical 'Received request' / '200 OK' / 'Added request' triplets, 23:37:06 to 23:37:16, elided ...]
INFO 03-01 23:37:16 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[... three identical 'Received request' / '200 OK' / 'Added request' triplets, 23:37:17 to 23:37:19, elided ...]
INFO 03-01 23:37:20 [logger.py:42] Received request cmpl-42bdab5dca7842cfb266b02e57970037-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:37:20 [async_llm.py:261] Added request cmpl-42bdab5dca7842cfb266b02e57970037-0.
INFO 03-01 23:37:21 [logger.py:42] Received request cmpl-b4aed279111a487db2481b1063b26cb0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:37:21 [async_llm.py:261] Added request cmpl-b4aed279111a487db2481b1063b26cb0-0.
INFO 03-01 23:37:22 [logger.py:42] Received request cmpl-e910e6a3c05a4e65be125ca27c15b32d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:37:22 [async_llm.py:261] Added request cmpl-e910e6a3c05a4e65be125ca27c15b32d-0.
INFO 03-01 23:37:23 [logger.py:42] Received request cmpl-c6449c2a561e43719027c9eba83d67ec-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:37:23 [async_llm.py:261] Added request cmpl-c6449c2a561e43719027c9eba83d67ec-0.
INFO 03-01 23:37:24 [logger.py:42] Received request cmpl-da82ba4a10bb486597381a6a3c54141f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:37:24 [async_llm.py:261] Added request cmpl-da82ba4a10bb486597381a6a3c54141f-0.
INFO 03-01 23:37:25 [logger.py:42] Received request cmpl-fe4ddb76dfc34db4afd2286d6c4922c5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:37:25 [async_llm.py:261] Added request cmpl-fe4ddb76dfc34db4afd2286d6c4922c5-0.
INFO 03-01 23:37:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:37:26 [logger.py:42] Received request cmpl-433c3b3fe618498c89a61acaf458e72f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:37:26 [async_llm.py:261] Added request cmpl-433c3b3fe618498c89a61acaf458e72f-0.
INFO 03-01 23:37:27 [logger.py:42] Received request cmpl-0a412996b75e4faf8f0665e151ddc2db-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:37:27 [async_llm.py:261] Added request cmpl-0a412996b75e4faf8f0665e151ddc2db-0.
INFO 03-01 23:37:29 [logger.py:42] Received request cmpl-dec2e3f4a4234a54a85794ab3c1a0b1f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:37:29 [async_llm.py:261] Added request cmpl-dec2e3f4a4234a54a85794ab3c1a0b1f-0.
INFO 03-01 23:37:30 [logger.py:42] Received request cmpl-5b25e18186d54265b59b4458097e6f12-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:37:30 [async_llm.py:261] Added request cmpl-5b25e18186d54265b59b4458097e6f12-0.
INFO 03-01 23:37:31 [logger.py:42] Received request cmpl-147e787bdab74ab2a34365af61b791b4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:37:31 [async_llm.py:261] Added request cmpl-147e787bdab74ab2a34365af61b791b4-0.
INFO 03-01 23:37:32 [logger.py:42] Received request cmpl-1e5019e9092c42739bbb04d16151de88-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:37:32 [async_llm.py:261] Added request cmpl-1e5019e9092c42739bbb04d16151de88-0.
INFO 03-01 23:37:33 [logger.py:42] Received request cmpl-227fc653eb2b4d9ba3a402c5ff22d088-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:37:33 [async_llm.py:261] Added request cmpl-227fc653eb2b4d9ba3a402c5ff22d088-0.
INFO 03-01 23:37:34 [logger.py:42] Received request cmpl-5dcfbbe1bc194b4296533b7e4ca88596-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:37:34 [async_llm.py:261] Added request cmpl-5dcfbbe1bc194b4296533b7e4ca88596-0.
INFO 03-01 23:37:35 [logger.py:42] Received request cmpl-50c59542ae944bfc9801898f407be188-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:37:35 [async_llm.py:261] Added request cmpl-50c59542ae944bfc9801898f407be188-0.
INFO 03-01 23:37:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:37:36 [logger.py:42] Received request cmpl-c7744caf3dc94a1eba44f6051ca15ef7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:37:36 [async_llm.py:261] Added request cmpl-c7744caf3dc94a1eba44f6051ca15ef7-0.
INFO 03-01 23:37:37 [logger.py:42] Received request cmpl-6e1efc9f79474ab382a8a2f008b14533-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:37:37 [async_llm.py:261] Added request cmpl-6e1efc9f79474ab382a8a2f008b14533-0.
INFO 03-01 23:37:38 [logger.py:42] Received request cmpl-7b1b78893e3d46508455c03cb11115f9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:37:38 [async_llm.py:261] Added request cmpl-7b1b78893e3d46508455c03cb11115f9-0.
INFO 03-01 23:37:39 [logger.py:42] Received request cmpl-9d8c6b04267b4ea296cf25de9410937d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:37:39 [async_llm.py:261] Added request cmpl-9d8c6b04267b4ea296cf25de9410937d-0.
INFO 03-01 23:37:41 [logger.py:42] Received request cmpl-5bba8323c27c4f57a4987911a0f13564-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:37:41 [async_llm.py:261] Added request cmpl-5bba8323c27c4f57a4987911a0f13564-0.
INFO 03-01 23:37:42 [logger.py:42] Received request cmpl-d3e010c7df674efca8412f2f2e46fe4f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:37:42 [async_llm.py:261] Added request cmpl-d3e010c7df674efca8412f2f2e46fe4f-0.
INFO 03-01 23:37:43 [logger.py:42] Received request cmpl-b521c396d3854e53aa9ea6b99cbb9224-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:37:43 [async_llm.py:261] Added request cmpl-b521c396d3854e53aa9ea6b99cbb9224-0.
INFO 03-01 23:37:44 [logger.py:42] Received request cmpl-aaf468f3ba1b4b1dbde7546cc401dee2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:37:44 [async_llm.py:261] Added request cmpl-aaf468f3ba1b4b1dbde7546cc401dee2-0.
INFO 03-01 23:37:45 [logger.py:42] Received request cmpl-b3dbbc946e2249e094cc1b9a8f4e0308-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:37:45 [async_llm.py:261] Added request cmpl-b3dbbc946e2249e094cc1b9a8f4e0308-0.
INFO 03-01 23:37:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:37:46 [logger.py:42] Received request cmpl-37be80f43dfb4839a0e57ac55ae099ae-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:37:46 [async_llm.py:261] Added request cmpl-37be80f43dfb4839a0e57ac55ae099ae-0.
INFO 03-01 23:37:47 [logger.py:42] Received request cmpl-983a6196f16d4d828ba06fce4a8cbf17-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:37:47 [async_llm.py:261] Added request cmpl-983a6196f16d4d828ba06fce4a8cbf17-0.
INFO 03-01 23:37:48 [logger.py:42] Received request cmpl-b3fbe432f2a6458282996ccaaa1d7957-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:37:48 [async_llm.py:261] Added request cmpl-b3fbe432f2a6458282996ccaaa1d7957-0.
INFO 03-01 23:37:49 [logger.py:42] Received request cmpl-aa2ec58142cd40c2b8042fd995bd0aaa-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:37:49 [async_llm.py:261] Added request cmpl-aa2ec58142cd40c2b8042fd995bd0aaa-0.
INFO 03-01 23:37:50 [logger.py:42] Received request cmpl-891345dc8ceb483eac460aebb6d86b16-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:37:50 [async_llm.py:261] Added request cmpl-891345dc8ceb483eac460aebb6d86b16-0.
INFO 03-01 23:37:51 [logger.py:42] Received request cmpl-0a588f6636314edebfb21da7594629aa-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:37:51 [async_llm.py:261] Added request cmpl-0a588f6636314edebfb21da7594629aa-0.
INFO 03-01 23:37:52 [logger.py:42] Received request cmpl-94fd796cbecf431f9521374a2c3d5fd6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:37:52 [async_llm.py:261] Added request cmpl-94fd796cbecf431f9521374a2c3d5fd6-0.
INFO 03-01 23:37:54 [logger.py:42] Received request cmpl-6ea56ee170d144fd8a50230dfc9d7da6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:37:54 [async_llm.py:261] Added request cmpl-6ea56ee170d144fd8a50230dfc9d7da6-0.
[... the Received / 200 OK / Added cycle above repeats roughly once per second through 23:38:37, each time with the same prompt and SamplingParams; only the request ID and timestamp differ. Periodic engine stats emitted during the same window: ...]
INFO 03-01 23:37:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:38:06 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:38:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:38:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:38:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:38:37 [async_llm.py:261] Added request cmpl-fc143db58f024649b6a75d17d3d56b00-0.
INFO 03-01 23:38:38 [logger.py:42] Received request cmpl-0309858d412e463a922612aba1a32a24-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:38:38 [async_llm.py:261] Added request cmpl-0309858d412e463a922612aba1a32a24-0.
INFO 03-01 23:38:39 [logger.py:42] Received request cmpl-d4e6f317233f45b8920d8e09e9c9a8b7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:38:39 [async_llm.py:261] Added request cmpl-d4e6f317233f45b8920d8e09e9c9a8b7-0.
INFO 03-01 23:38:40 [logger.py:42] Received request cmpl-848b40bab0cd439abd7b0963491323ea-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:38:40 [async_llm.py:261] Added request cmpl-848b40bab0cd439abd7b0963491323ea-0.
INFO 03-01 23:38:41 [logger.py:42] Received request cmpl-e5dd1b92ce624f27b0edb6575c339a3e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:38:41 [async_llm.py:261] Added request cmpl-e5dd1b92ce624f27b0edb6575c339a3e-0.
INFO 03-01 23:38:42 [logger.py:42] Received request cmpl-a43f177068264eeab01c1ed373c96747-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:38:42 [async_llm.py:261] Added request cmpl-a43f177068264eeab01c1ed373c96747-0.
INFO 03-01 23:38:43 [logger.py:42] Received request cmpl-3c543c044b624d3cb147b0b2a92133d3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:38:43 [async_llm.py:261] Added request cmpl-3c543c044b624d3cb147b0b2a92133d3-0.
INFO 03-01 23:38:45 [logger.py:42] Received request cmpl-5f8351c7212e4abc80ecac84fa84541f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:38:45 [async_llm.py:261] Added request cmpl-5f8351c7212e4abc80ecac84fa84541f-0.
INFO 03-01 23:38:46 [logger.py:42] Received request cmpl-fe117ea097474f79ba38b5c84c8734b3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:38:46 [async_llm.py:261] Added request cmpl-fe117ea097474f79ba38b5c84c8734b3-0.
INFO 03-01 23:38:46 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.9 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 1.3%, Prefix cache hit rate: 0.0%
INFO 03-01 23:38:47 [logger.py:42] Received request cmpl-0d606cc7330045afa430b86a88ab3655-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:38:47 [async_llm.py:261] Added request cmpl-0d606cc7330045afa430b86a88ab3655-0.
INFO 03-01 23:38:48 [logger.py:42] Received request cmpl-e042591467254b0d809a910b93be137f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:38:48 [async_llm.py:261] Added request cmpl-e042591467254b0d809a910b93be137f-0.
INFO 03-01 23:38:49 [logger.py:42] Received request cmpl-15935954851340ccac5aa40c776abedf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:38:49 [async_llm.py:261] Added request cmpl-15935954851340ccac5aa40c776abedf-0.
INFO 03-01 23:38:50 [logger.py:42] Received request cmpl-d22b0aaae103482db1443ca9cd424aba-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:38:50 [async_llm.py:261] Added request cmpl-d22b0aaae103482db1443ca9cd424aba-0.
INFO 03-01 23:38:51 [logger.py:42] Received request cmpl-25fec775f7fa4ad596b006b2e1f2af20-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:38:51 [async_llm.py:261] Added request cmpl-25fec775f7fa4ad596b006b2e1f2af20-0.
INFO 03-01 23:38:52 [logger.py:42] Received request cmpl-448b870390c14f34b654ca45ae4f2d81-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:38:52 [async_llm.py:261] Added request cmpl-448b870390c14f34b654ca45ae4f2d81-0.
INFO 03-01 23:38:53 [logger.py:42] Received request cmpl-eae19a2d508a41c4bddf95303574daa1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:38:53 [async_llm.py:261] Added request cmpl-eae19a2d508a41c4bddf95303574daa1-0.
INFO 03-01 23:38:54 [logger.py:42] Received request cmpl-76aec8132cba4002838b0dd7e462506d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:38:54 [async_llm.py:261] Added request cmpl-76aec8132cba4002838b0dd7e462506d-0.
INFO 03-01 23:38:55 [logger.py:42] Received request cmpl-ab960c781b994aa89edc45875e62a4b6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:38:55 [async_llm.py:261] Added request cmpl-ab960c781b994aa89edc45875e62a4b6-0.
INFO 03-01 23:38:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.6 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:38:56 [logger.py:42] Received request cmpl-769e44b0861544ffb7a8ce5f2f26d16b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:38:56 [async_llm.py:261] Added request cmpl-769e44b0861544ffb7a8ce5f2f26d16b-0.
INFO 03-01 23:38:58 [logger.py:42] Received request cmpl-13ec4f75de284e21b5a234418e12f15f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:38:58 [async_llm.py:261] Added request cmpl-13ec4f75de284e21b5a234418e12f15f-0.
INFO 03-01 23:38:59 [logger.py:42] Received request cmpl-af6eacebe4364782abf45af79318b1d6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:38:59 [async_llm.py:261] Added request cmpl-af6eacebe4364782abf45af79318b1d6-0.
INFO 03-01 23:39:00 [logger.py:42] Received request cmpl-d13595d222354cc696e69950f930daf9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:39:00 [async_llm.py:261] Added request cmpl-d13595d222354cc696e69950f930daf9-0.
INFO 03-01 23:39:01 [logger.py:42] Received request cmpl-4992ecb36ba141d599d235509c50ed31-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:39:01 [async_llm.py:261] Added request cmpl-4992ecb36ba141d599d235509c50ed31-0.
INFO 03-01 23:39:02 [logger.py:42] Received request cmpl-f034074b039a43049a95cdaa23247921-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:39:02 [async_llm.py:261] Added request cmpl-f034074b039a43049a95cdaa23247921-0.
INFO 03-01 23:39:03 [logger.py:42] Received request cmpl-b6b0c5eef9354b9b98544f414b2049a0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:39:03 [async_llm.py:261] Added request cmpl-b6b0c5eef9354b9b98544f414b2049a0-0.
INFO 03-01 23:39:04 [logger.py:42] Received request cmpl-1fd279adb1c94aeb8f84b3c802da8a90-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:39:04 [async_llm.py:261] Added request cmpl-1fd279adb1c94aeb8f84b3c802da8a90-0.
INFO 03-01 23:39:05 [logger.py:42] Received request cmpl-47144d438b8b4e3985378254c406d3f6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:39:05 [async_llm.py:261] Added request cmpl-47144d438b8b4e3985378254c406d3f6-0.
INFO 03-01 23:39:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:39:06 [logger.py:42] Received request cmpl-adf0bb64885e4c4882c8f2a38905482e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:39:06 [async_llm.py:261] Added request cmpl-adf0bb64885e4c4882c8f2a38905482e-0.
INFO 03-01 23:39:07 [logger.py:42] Received request cmpl-f4a453c01ba24fe39e6eb0a6a0cc1874-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:39:07 [async_llm.py:261] Added request cmpl-f4a453c01ba24fe39e6eb0a6a0cc1874-0.
INFO 03-01 23:39:08 [logger.py:42] Received request cmpl-9c854d99fe964a9c8bcdc4da39d6d244-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:39:08 [async_llm.py:261] Added request cmpl-9c854d99fe964a9c8bcdc4da39d6d244-0.
INFO 03-01 23:39:09 [logger.py:42] Received request cmpl-7dbb5f4b115045b3845d85d011d5bd97-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:39:09 [async_llm.py:261] Added request cmpl-7dbb5f4b115045b3845d85d011d5bd97-0.
INFO 03-01 23:39:11 [logger.py:42] Received request cmpl-d85464d56f9c4ed596863a2cd29e7571-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:39:11 [async_llm.py:261] Added request cmpl-d85464d56f9c4ed596863a2cd29e7571-0.
INFO 03-01 23:39:12 [logger.py:42] Received request cmpl-fdf7bc5c43d345df91d61745f43c6fa3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:39:12 [async_llm.py:261] Added request cmpl-fdf7bc5c43d345df91d61745f43c6fa3-0.
INFO 03-01 23:39:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:39:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:39:36 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:39:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:39:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:39:56 [logger.py:42] Received request cmpl-580e09ef4b6f452fa88506250aed56e0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:39:56 [async_llm.py:261] Added request cmpl-580e09ef4b6f452fa88506250aed56e0-0.
INFO 03-01 23:39:57 [logger.py:42] Received request cmpl-44d4e4001a574739af5a3d547eded484-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:39:57 [async_llm.py:261] Added request cmpl-44d4e4001a574739af5a3d547eded484-0.
INFO 03-01 23:39:58 [logger.py:42] Received request cmpl-6f847f1fed0845b79226f7871f0d8f4f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:39:58 [async_llm.py:261] Added request cmpl-6f847f1fed0845b79226f7871f0d8f4f-0.
INFO 03-01 23:39:59 [logger.py:42] Received request cmpl-316b61e485d04f3aa7e7504b83541ff5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:39:59 [async_llm.py:261] Added request cmpl-316b61e485d04f3aa7e7504b83541ff5-0.
INFO 03-01 23:40:00 [logger.py:42] Received request cmpl-c490b099305b4e81a6f751d6b9ce2957-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:00 [async_llm.py:261] Added request cmpl-c490b099305b4e81a6f751d6b9ce2957-0.
INFO 03-01 23:40:01 [logger.py:42] Received request cmpl-646f4ce4ddf5488584a542bda7e450d4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:01 [async_llm.py:261] Added request cmpl-646f4ce4ddf5488584a542bda7e450d4-0.
INFO 03-01 23:40:03 [logger.py:42] Received request cmpl-83b68cf79a1047ca94a674627d126178-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:03 [async_llm.py:261] Added request cmpl-83b68cf79a1047ca94a674627d126178-0.
INFO 03-01 23:40:04 [logger.py:42] Received request cmpl-0af34654ac12400b93f6447e05ce70a1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:04 [async_llm.py:261] Added request cmpl-0af34654ac12400b93f6447e05ce70a1-0.
INFO 03-01 23:40:05 [logger.py:42] Received request cmpl-35932e0ec0cf41e4bd0ea3802b5b7b50-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:05 [async_llm.py:261] Added request cmpl-35932e0ec0cf41e4bd0ea3802b5b7b50-0.
INFO 03-01 23:40:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:40:06 [logger.py:42] Received request cmpl-b4ba086090334940ad034e68580d2564-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:06 [async_llm.py:261] Added request cmpl-b4ba086090334940ad034e68580d2564-0.
INFO 03-01 23:40:07 [logger.py:42] Received request cmpl-b3deb4a28e374bc5a1f0164672d39650-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:07 [async_llm.py:261] Added request cmpl-b3deb4a28e374bc5a1f0164672d39650-0.
INFO 03-01 23:40:08 [logger.py:42] Received request cmpl-5d7684921bbe484988f031eeffcb31bd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:08 [async_llm.py:261] Added request cmpl-5d7684921bbe484988f031eeffcb31bd-0.
INFO 03-01 23:40:09 [logger.py:42] Received request cmpl-cc4d3e82b0034fddba17aba9a88d329d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:09 [async_llm.py:261] Added request cmpl-cc4d3e82b0034fddba17aba9a88d329d-0.
INFO 03-01 23:40:10 [logger.py:42] Received request cmpl-737e917ba2eb42eaaf30d9899aeb9daf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:10 [async_llm.py:261] Added request cmpl-737e917ba2eb42eaaf30d9899aeb9daf-0.
INFO 03-01 23:40:11 [logger.py:42] Received request cmpl-6ced9abaf9054b5baef176fe427eb09a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:11 [async_llm.py:261] Added request cmpl-6ced9abaf9054b5baef176fe427eb09a-0.
INFO 03-01 23:40:12 [logger.py:42] Received request cmpl-0b1b6588ad084488ae077782cf5e8071-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:12 [async_llm.py:261] Added request cmpl-0b1b6588ad084488ae077782cf5e8071-0.
INFO 03-01 23:40:13 [logger.py:42] Received request cmpl-544db674cd294fe698dc71d4bc4cf6c0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:13 [async_llm.py:261] Added request cmpl-544db674cd294fe698dc71d4bc4cf6c0-0.
INFO 03-01 23:40:14 [logger.py:42] Received request cmpl-5ae000c47d0842d7b626a214a86e1853-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:14 [async_llm.py:261] Added request cmpl-5ae000c47d0842d7b626a214a86e1853-0.
INFO 03-01 23:40:16 [logger.py:42] Received request cmpl-8f08f1dee4ae4c82a1b823a566fcec1a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:16 [async_llm.py:261] Added request cmpl-8f08f1dee4ae4c82a1b823a566fcec1a-0.
INFO 03-01 23:40:16 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:40:17 [logger.py:42] Received request cmpl-c695eff7e58f45bbbf48c0375f9921e8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:17 [async_llm.py:261] Added request cmpl-c695eff7e58f45bbbf48c0375f9921e8-0.
INFO 03-01 23:40:18 [logger.py:42] Received request cmpl-e70d90a3015a4fd29061feb762d7d470-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:18 [async_llm.py:261] Added request cmpl-e70d90a3015a4fd29061feb762d7d470-0.
INFO 03-01 23:40:19 [logger.py:42] Received request cmpl-602d984a255e44f09c239eb14614f061-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:19 [async_llm.py:261] Added request cmpl-602d984a255e44f09c239eb14614f061-0.
INFO 03-01 23:40:20 [logger.py:42] Received request cmpl-66ef77ed697d45168b4b15a384ae92c9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:20 [async_llm.py:261] Added request cmpl-66ef77ed697d45168b4b15a384ae92c9-0.
INFO 03-01 23:40:21 [logger.py:42] Received request cmpl-9f34e0fae77b415581c2ac5e3aa2b69c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:21 [async_llm.py:261] Added request cmpl-9f34e0fae77b415581c2ac5e3aa2b69c-0.
INFO 03-01 23:40:22 [logger.py:42] Received request cmpl-064156adbf524d00bb0534cdb0220476-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:22 [async_llm.py:261] Added request cmpl-064156adbf524d00bb0534cdb0220476-0.
INFO 03-01 23:40:23 [logger.py:42] Received request cmpl-dfc3054cbcf24601859a4b46298e4e24-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:23 [async_llm.py:261] Added request cmpl-dfc3054cbcf24601859a4b46298e4e24-0.
INFO 03-01 23:40:24 [logger.py:42] Received request cmpl-2cfbcec346414de2b5b0870c36e53ace-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:24 [async_llm.py:261] Added request cmpl-2cfbcec346414de2b5b0870c36e53ace-0.
INFO 03-01 23:40:25 [logger.py:42] Received request cmpl-02534de77bc64259b1925c4c128bcd5a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:25 [async_llm.py:261] Added request cmpl-02534de77bc64259b1925c4c128bcd5a-0.
INFO 03-01 23:40:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:40:26 [logger.py:42] Received request cmpl-3962440e24994b9db101000946e4ccbc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:26 [async_llm.py:261] Added request cmpl-3962440e24994b9db101000946e4ccbc-0.
INFO 03-01 23:40:27 [logger.py:42] Received request cmpl-08558b53d71b428fb67e3dac9dc59772-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:27 [async_llm.py:261] Added request cmpl-08558b53d71b428fb67e3dac9dc59772-0.
INFO 03-01 23:40:29 [logger.py:42] Received request cmpl-e5e7c333f66e47e99b1698e73fcb2ad7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:29 [async_llm.py:261] Added request cmpl-e5e7c333f66e47e99b1698e73fcb2ad7-0.
INFO 03-01 23:40:30 [logger.py:42] Received request cmpl-0da49c77ae7846fc9c8565787c0babab-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:30 [async_llm.py:261] Added request cmpl-0da49c77ae7846fc9c8565787c0babab-0.
[... 5 further identical completion requests ('write a quick sort algorithm.', max_tokens=5), 23:40:31 to 23:40:35, each logged as Received request → 200 OK → Added request; repeated entries elided ...]
INFO 03-01 23:40:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[... 9 further identical completion requests, 23:40:36 to 23:40:45, each logged as Received request → 200 OK → Added request; repeated entries elided ...]
INFO 03-01 23:40:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[... 10 further identical completion requests, 23:40:46 to 23:40:56, each logged as Received request → 200 OK → Added request; repeated entries elided ...]
INFO 03-01 23:40:56 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.6 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 1.3%, Prefix cache hit rate: 0.0%
[... 9 further identical completion requests, 23:40:57 to 23:41:05, each logged as Received request → 200 OK → Added request; repeated entries elided ...]
INFO 03-01 23:41:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.9 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[... 8 further identical completion requests, 23:41:06 to 23:41:14, each logged as Received request → 200 OK → Added request; repeated entries elided; log truncated mid-entry ...]
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:14 [async_llm.py:261] Added request cmpl-194ff908a15a4617b74d2016fd77a45a-0.
INFO 03-01 23:41:15 [logger.py:42] Received request cmpl-062e51ee3cc04eeaa76e4b0efa3836e6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:15 [async_llm.py:261] Added request cmpl-062e51ee3cc04eeaa76e4b0efa3836e6-0.
INFO 03-01 23:41:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:41:16 [logger.py:42] Received request cmpl-00ae41cf9184479bb41ba452d64f19b0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:16 [async_llm.py:261] Added request cmpl-00ae41cf9184479bb41ba452d64f19b0-0.
INFO 03-01 23:41:17 [logger.py:42] Received request cmpl-4084baaccb1940db9c1e421557384959-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:17 [async_llm.py:261] Added request cmpl-4084baaccb1940db9c1e421557384959-0.
INFO 03-01 23:41:18 [logger.py:42] Received request cmpl-b3416f55f3334787a041d6848d227bfb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:18 [async_llm.py:261] Added request cmpl-b3416f55f3334787a041d6848d227bfb-0.
INFO 03-01 23:41:19 [logger.py:42] Received request cmpl-9fad591e7f8741c38606a40c51dd79f7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:19 [async_llm.py:261] Added request cmpl-9fad591e7f8741c38606a40c51dd79f7-0.
INFO 03-01 23:41:21 [logger.py:42] Received request cmpl-3f28e1239ba44e83837bcdfe128ac9c8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:21 [async_llm.py:261] Added request cmpl-3f28e1239ba44e83837bcdfe128ac9c8-0.
INFO 03-01 23:41:22 [logger.py:42] Received request cmpl-bc685272c4f34ec3a2c82dd70936d407-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:22 [async_llm.py:261] Added request cmpl-bc685272c4f34ec3a2c82dd70936d407-0.
INFO 03-01 23:41:23 [logger.py:42] Received request cmpl-1b5c89886f4c4b5595af355a58645711-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:23 [async_llm.py:261] Added request cmpl-1b5c89886f4c4b5595af355a58645711-0.
INFO 03-01 23:41:24 [logger.py:42] Received request cmpl-ecb0805f8e0d4e689c8fb72fd03ea4be-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:24 [async_llm.py:261] Added request cmpl-ecb0805f8e0d4e689c8fb72fd03ea4be-0.
INFO 03-01 23:41:25 [logger.py:42] Received request cmpl-66edf57776bf4a03ab41fa3e013771c0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:25 [async_llm.py:261] Added request cmpl-66edf57776bf4a03ab41fa3e013771c0-0.
INFO 03-01 23:41:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:41:26 [logger.py:42] Received request cmpl-b34aec9625fd49fca0f29a005c5d08b3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:26 [async_llm.py:261] Added request cmpl-b34aec9625fd49fca0f29a005c5d08b3-0.
INFO 03-01 23:41:27 [logger.py:42] Received request cmpl-41c828ca844d4821aaff7bd654f1d982-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:27 [async_llm.py:261] Added request cmpl-41c828ca844d4821aaff7bd654f1d982-0.
INFO 03-01 23:41:28 [logger.py:42] Received request cmpl-eee36b3b50f8452d887e43f2b9335d40-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:28 [async_llm.py:261] Added request cmpl-eee36b3b50f8452d887e43f2b9335d40-0.
INFO 03-01 23:41:29 [logger.py:42] Received request cmpl-253f995ff49e4693ae8694b500b008ab-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:29 [async_llm.py:261] Added request cmpl-253f995ff49e4693ae8694b500b008ab-0.
INFO 03-01 23:41:30 [logger.py:42] Received request cmpl-93b6a61c1c2d4493ac9c873811c394a1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:30 [async_llm.py:261] Added request cmpl-93b6a61c1c2d4493ac9c873811c394a1-0.
INFO 03-01 23:41:31 [logger.py:42] Received request cmpl-53a4d57193f94a62b463e1621d49f901-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:31 [async_llm.py:261] Added request cmpl-53a4d57193f94a62b463e1621d49f901-0.
INFO 03-01 23:41:33 [logger.py:42] Received request cmpl-25ebb0657c6046e3a9a2152f3ea327bd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:33 [async_llm.py:261] Added request cmpl-25ebb0657c6046e3a9a2152f3ea327bd-0.
INFO 03-01 23:41:34 [logger.py:42] Received request cmpl-96d7cd6e30b44a44bf6c8b341eca6a98-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:34 [async_llm.py:261] Added request cmpl-96d7cd6e30b44a44bf6c8b341eca6a98-0.
INFO 03-01 23:41:35 [logger.py:42] Received request cmpl-f1a0b17a9f6c4d3e99564ae3f9f4ecd9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:35 [async_llm.py:261] Added request cmpl-f1a0b17a9f6c4d3e99564ae3f9f4ecd9-0.
INFO 03-01 23:41:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:41:36 [logger.py:42] Received request cmpl-7faf9606342a4a7f9c76edb5b3f34a8f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:36 [async_llm.py:261] Added request cmpl-7faf9606342a4a7f9c76edb5b3f34a8f-0.
INFO 03-01 23:41:37 [logger.py:42] Received request cmpl-381fd09b96d94b3e8c03eb93761afab3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:37 [async_llm.py:261] Added request cmpl-381fd09b96d94b3e8c03eb93761afab3-0.
INFO 03-01 23:41:38 [logger.py:42] Received request cmpl-3a354ad7c58049e399a84eba744089fb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:38 [async_llm.py:261] Added request cmpl-3a354ad7c58049e399a84eba744089fb-0.
INFO 03-01 23:41:39 [logger.py:42] Received request cmpl-80e10a5c90b6402198977960753ddd86-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:39 [async_llm.py:261] Added request cmpl-80e10a5c90b6402198977960753ddd86-0.
INFO 03-01 23:41:40 [logger.py:42] Received request cmpl-dc8f082e958e4268894736c4d57985fc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:40 [async_llm.py:261] Added request cmpl-dc8f082e958e4268894736c4d57985fc-0.
INFO 03-01 23:41:41 [logger.py:42] Received request cmpl-93edd7c5589f45269d0a8782e9c0a3fb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:41 [async_llm.py:261] Added request cmpl-93edd7c5589f45269d0a8782e9c0a3fb-0.
INFO 03-01 23:41:42 [logger.py:42] Received request cmpl-f630f83393a04dcab5f1ec6039db6719-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:42 [async_llm.py:261] Added request cmpl-f630f83393a04dcab5f1ec6039db6719-0.
INFO 03-01 23:41:43 [logger.py:42] Received request cmpl-572631c3e96949479f6f646f08e98c66-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:43 [async_llm.py:261] Added request cmpl-572631c3e96949479f6f646f08e98c66-0.
INFO 03-01 23:41:44 [logger.py:42] Received request cmpl-2eb6a6c0283a4883a8364d29de61384e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:44 [async_llm.py:261] Added request cmpl-2eb6a6c0283a4883a8364d29de61384e-0.
INFO 03-01 23:41:46 [logger.py:42] Received request cmpl-c0c1f8bd2a91432a9c771b2b4cf19bf9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:46 [async_llm.py:261] Added request cmpl-c0c1f8bd2a91432a9c771b2b4cf19bf9-0.
INFO 03-01 23:41:46 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:41:47 [logger.py:42] Received request cmpl-65ed7ad530cd4d43b33fa416f9fa9a8f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:47 [async_llm.py:261] Added request cmpl-65ed7ad530cd4d43b33fa416f9fa9a8f-0.
INFO 03-01 23:41:48 [logger.py:42] Received request cmpl-bb70d8e14ede4deeadf0d17f5ab41325-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:48 [async_llm.py:261] Added request cmpl-bb70d8e14ede4deeadf0d17f5ab41325-0.
INFO 03-01 23:41:49 [logger.py:42] Received request cmpl-c38835c1a1564681914dcbebac63a62d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:49 [async_llm.py:261] Added request cmpl-c38835c1a1564681914dcbebac63a62d-0.
INFO 03-01 23:41:50 [logger.py:42] Received request cmpl-695e075a0d2045cdab06ad617758bf9b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:50 [async_llm.py:261] Added request cmpl-695e075a0d2045cdab06ad617758bf9b-0.
INFO 03-01 23:41:51 [logger.py:42] Received request cmpl-de5082225e134577bfc087d1c8eaee26-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:51 [async_llm.py:261] Added request cmpl-de5082225e134577bfc087d1c8eaee26-0.
INFO 03-01 23:41:52 [logger.py:42] Received request cmpl-12e24a4426b14a06be12600f3f39cdd5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:52 [async_llm.py:261] Added request cmpl-12e24a4426b14a06be12600f3f39cdd5-0.
INFO 03-01 23:41:53 [logger.py:42] Received request cmpl-e334f4a89c904d60b4f7f2390e952ecc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:53 [async_llm.py:261] Added request cmpl-e334f4a89c904d60b4f7f2390e952ecc-0.
INFO 03-01 23:41:54 [logger.py:42] Received request cmpl-89227fba604f4ae1af71ab8fd6652500-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:54 [async_llm.py:261] Added request cmpl-89227fba604f4ae1af71ab8fd6652500-0.
INFO 03-01 23:41:55 [logger.py:42] Received request cmpl-949110853eb14683856237e4881969f1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:55 [async_llm.py:261] Added request cmpl-949110853eb14683856237e4881969f1-0.
INFO 03-01 23:41:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:41:56 [logger.py:42] Received request cmpl-4e2f53ee45994d30ad5bde8420114182-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:56 [async_llm.py:261] Added request cmpl-4e2f53ee45994d30ad5bde8420114182-0.
INFO 03-01 23:41:57 [logger.py:42] Received request cmpl-39a7842f00b84d959fb1cab3cba2341a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:57 [async_llm.py:261] Added request cmpl-39a7842f00b84d959fb1cab3cba2341a-0.
INFO 03-01 23:41:58 [logger.py:42] Received request cmpl-908a9c64b38148de9ef4a1aed63afa94-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:58 [async_llm.py:261] Added request cmpl-908a9c64b38148de9ef4a1aed63afa94-0.
INFO 03-01 23:42:00 [logger.py:42] Received request cmpl-0a3f086e053148648caf37fd417302e6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:00 [async_llm.py:261] Added request cmpl-0a3f086e053148648caf37fd417302e6-0.
INFO 03-01 23:42:01 [logger.py:42] Received request cmpl-189308b2850f4eb3b30184d6b01ccb2b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:01 [async_llm.py:261] Added request cmpl-189308b2850f4eb3b30184d6b01ccb2b-0.
INFO 03-01 23:42:02 [logger.py:42] Received request cmpl-0e3cf257f20d453c92f808aef2bfec7c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:02 [async_llm.py:261] Added request cmpl-0e3cf257f20d453c92f808aef2bfec7c-0.
INFO 03-01 23:42:03 [logger.py:42] Received request cmpl-3ccef95fce0e4865b06cbf86deabe477-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:03 [async_llm.py:261] Added request cmpl-3ccef95fce0e4865b06cbf86deabe477-0.
INFO 03-01 23:42:04 [logger.py:42] Received request cmpl-9863c2b6bd434f088cb03baf8c39606b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:04 [async_llm.py:261] Added request cmpl-9863c2b6bd434f088cb03baf8c39606b-0.
INFO 03-01 23:42:05 [logger.py:42] Received request cmpl-312e8f3d78fd443a94ca5ccd72432fe6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:05 [async_llm.py:261] Added request cmpl-312e8f3d78fd443a94ca5ccd72432fe6-0.
INFO 03-01 23:42:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:42:06 [logger.py:42] Received request cmpl-e974ff566be44e57991fbb643f48a4b9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:06 [async_llm.py:261] Added request cmpl-e974ff566be44e57991fbb643f48a4b9-0.
INFO 03-01 23:42:07 [logger.py:42] Received request cmpl-496aba8438b34c1388f4a11522685187-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:07 [async_llm.py:261] Added request cmpl-496aba8438b34c1388f4a11522685187-0.
INFO 03-01 23:42:08 [logger.py:42] Received request cmpl-31ab99561f1845948de0b2dea6e2ee7c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:08 [async_llm.py:261] Added request cmpl-31ab99561f1845948de0b2dea6e2ee7c-0.
INFO 03-01 23:42:09 [logger.py:42] Received request cmpl-035598cee7d04dbd90af5f7446d240dd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:09 [async_llm.py:261] Added request cmpl-035598cee7d04dbd90af5f7446d240dd-0.
INFO 03-01 23:42:10 [logger.py:42] Received request cmpl-1892579b1bbc45369629d6c8f1b0a19b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:10 [async_llm.py:261] Added request cmpl-1892579b1bbc45369629d6c8f1b0a19b-0.
INFO 03-01 23:42:11 [logger.py:42] Received request cmpl-b497ff6960e245ff9dc4598acae466b5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:11 [async_llm.py:261] Added request cmpl-b497ff6960e245ff9dc4598acae466b5-0.
INFO 03-01 23:42:13 [logger.py:42] Received request cmpl-3e238ce8ae6a49c1828f7b86dbfe42ec-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:13 [async_llm.py:261] Added request cmpl-3e238ce8ae6a49c1828f7b86dbfe42ec-0.
INFO 03-01 23:42:14 [logger.py:42] Received request cmpl-0566dbf6856a40baaad0d3d98d2c4071-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:14 [async_llm.py:261] Added request cmpl-0566dbf6856a40baaad0d3d98d2c4071-0.
INFO 03-01 23:42:15 [logger.py:42] Received request cmpl-ebeea4be97f04feba81e885f1fee89cc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:15 [async_llm.py:261] Added request cmpl-ebeea4be97f04feba81e885f1fee89cc-0.
INFO 03-01 23:42:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:42:16 [logger.py:42] Received request cmpl-2126623bce72416d849495c7e71bbd07-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:16 [async_llm.py:261] Added request cmpl-2126623bce72416d849495c7e71bbd07-0.
INFO 03-01 23:42:17 [logger.py:42] Received request cmpl-49241b08769d419b91533a79501c7184-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:17 [async_llm.py:261] Added request cmpl-49241b08769d419b91533a79501c7184-0.
INFO 03-01 23:42:18 [logger.py:42] Received request cmpl-cb621afc3ebc408a90288a833706882f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:18 [async_llm.py:261] Added request cmpl-cb621afc3ebc408a90288a833706882f-0.
INFO 03-01 23:42:19 [logger.py:42] Received request cmpl-014c45f30f36464aa96e50791a328c42-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:19 [async_llm.py:261] Added request cmpl-014c45f30f36464aa96e50791a328c42-0.
INFO 03-01 23:42:20 [logger.py:42] Received request cmpl-8c96996bc8b147e788d7ffcce440f4c8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:20 [async_llm.py:261] Added request cmpl-8c96996bc8b147e788d7ffcce440f4c8-0.
INFO 03-01 23:42:21 [logger.py:42] Received request cmpl-45bdd3dc68a0443b96b133ff00416bb6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:21 [async_llm.py:261] Added request cmpl-45bdd3dc68a0443b96b133ff00416bb6-0.
INFO 03-01 23:42:22 [logger.py:42] Received request cmpl-5b8eed9c1e024233a13f38d85d26349e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:22 [async_llm.py:261] Added request cmpl-5b8eed9c1e024233a13f38d85d26349e-0.
INFO 03-01 23:42:23 [logger.py:42] Received request cmpl-4cd0e264cdd34018b53661414cd2576a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:23 [async_llm.py:261] Added request cmpl-4cd0e264cdd34018b53661414cd2576a-0.
INFO 03-01 23:42:24 [logger.py:42] Received request cmpl-d18c839b75864936b52108d73c2b6cb9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:24 [async_llm.py:261] Added request cmpl-d18c839b75864936b52108d73c2b6cb9-0.
INFO 03-01 23:42:26 [logger.py:42] Received request cmpl-a50be1cff7c54c1a9e398b2dde95818e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:26 [async_llm.py:261] Added request cmpl-a50be1cff7c54c1a9e398b2dde95818e-0.
INFO 03-01 23:42:26 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:42:27 [logger.py:42] Received request cmpl-184e2b1d43534381924cdf1fc75ef9cb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:27 [async_llm.py:261] Added request cmpl-184e2b1d43534381924cdf1fc75ef9cb-0.
INFO 03-01 23:42:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:42:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:42:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:43:06 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.6 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 1.3%, Prefix cache hit rate: 0.0%
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:43:11 [async_llm.py:261] Added request cmpl-8bfdb82c2c60450fa849cb1827a7342f-0.
INFO 03-01 23:43:12 [logger.py:42] Received request cmpl-0f4edf4cc254422997e559f5a389736e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:43:12 [async_llm.py:261] Added request cmpl-0f4edf4cc254422997e559f5a389736e-0.
INFO 03-01 23:43:13 [logger.py:42] Received request cmpl-76e50977dd6c47c4a1d9fdcbf056677e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:43:13 [async_llm.py:261] Added request cmpl-76e50977dd6c47c4a1d9fdcbf056677e-0.
INFO 03-01 23:43:14 [logger.py:42] Received request cmpl-1a61bcf7b3e04673bce79eb6c06b7b9e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:43:14 [async_llm.py:261] Added request cmpl-1a61bcf7b3e04673bce79eb6c06b7b9e-0.
INFO 03-01 23:43:15 [logger.py:42] Received request cmpl-1b132a40784b41e68627af6ae1387357-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:43:15 [async_llm.py:261] Added request cmpl-1b132a40784b41e68627af6ae1387357-0.
INFO 03-01 23:43:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.9 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:43:16 [logger.py:42] Received request cmpl-f2818a06f1dc4226b9a740dbf493b319-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:43:17 [async_llm.py:261] Added request cmpl-f2818a06f1dc4226b9a740dbf493b319-0.
INFO 03-01 23:43:18 [logger.py:42] Received request cmpl-93132ab474634703a3911795a6cb3da6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:43:18 [async_llm.py:261] Added request cmpl-93132ab474634703a3911795a6cb3da6-0.
INFO 03-01 23:43:19 [logger.py:42] Received request cmpl-caf2ea08e680481f8aac679363ac10ac-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:43:19 [async_llm.py:261] Added request cmpl-caf2ea08e680481f8aac679363ac10ac-0.
INFO 03-01 23:43:20 [logger.py:42] Received request cmpl-38546449d93e4a318e2b40cd9acd92da-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:43:20 [async_llm.py:261] Added request cmpl-38546449d93e4a318e2b40cd9acd92da-0.
INFO 03-01 23:43:21 [logger.py:42] Received request cmpl-476edd0ce27a44cf9cef2838a2811fae-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:43:21 [async_llm.py:261] Added request cmpl-476edd0ce27a44cf9cef2838a2811fae-0.
INFO 03-01 23:43:22 [logger.py:42] Received request cmpl-0ccf12dc3a2544e68e9f801cd43a9dca-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:43:22 [async_llm.py:261] Added request cmpl-0ccf12dc3a2544e68e9f801cd43a9dca-0.
INFO 03-01 23:43:23 [logger.py:42] Received request cmpl-b65675c250da46a8b9698f0de73f5bc9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:43:23 [async_llm.py:261] Added request cmpl-b65675c250da46a8b9698f0de73f5bc9-0.
INFO 03-01 23:43:24 [logger.py:42] Received request cmpl-9735ac4fd7424bd3a960beb1448140a2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:43:24 [async_llm.py:261] Added request cmpl-9735ac4fd7424bd3a960beb1448140a2-0.
INFO 03-01 23:43:25 [logger.py:42] Received request cmpl-6c70a17994a747f6a9568445d0a94d8c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:43:25 [async_llm.py:261] Added request cmpl-6c70a17994a747f6a9568445d0a94d8c-0.
INFO 03-01 23:43:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:43:26 [logger.py:42] Received request cmpl-e9e75a49aa7441719640a3c34903825c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:43:26 [async_llm.py:261] Added request cmpl-e9e75a49aa7441719640a3c34903825c-0.
INFO 03-01 23:43:27 [logger.py:42] Received request cmpl-6c4e848678d143b98f2fd4b50987e983-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:43:27 [async_llm.py:261] Added request cmpl-6c4e848678d143b98f2fd4b50987e983-0.
INFO 03-01 23:43:28 [logger.py:42] Received request cmpl-4a7d559879df452d982f70b38c3ffef6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:43:28 [async_llm.py:261] Added request cmpl-4a7d559879df452d982f70b38c3ffef6-0.
INFO 03-01 23:43:30 [logger.py:42] Received request cmpl-ea7b637783d04631b065b60a0a7e2317-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:43:30 [async_llm.py:261] Added request cmpl-ea7b637783d04631b065b60a0a7e2317-0.
INFO 03-01 23:43:31 [logger.py:42] Received request cmpl-f9dae915e5c043c0a5304cc9160dce9a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:43:31 [async_llm.py:261] Added request cmpl-f9dae915e5c043c0a5304cc9160dce9a-0.
INFO 03-01 23:43:32 [logger.py:42] Received request cmpl-4828acf6012c4033ac395746f58cd106-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:43:32 [async_llm.py:261] Added request cmpl-4828acf6012c4033ac395746f58cd106-0.
INFO 03-01 23:43:33 [logger.py:42] Received request cmpl-fb134bd50945483a8c0307de5ada0092-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:43:33 [async_llm.py:261] Added request cmpl-fb134bd50945483a8c0307de5ada0092-0.
INFO 03-01 23:43:34 [logger.py:42] Received request cmpl-63ef39590cb14a3b994ddc787084fbe7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:43:34 [async_llm.py:261] Added request cmpl-63ef39590cb14a3b994ddc787084fbe7-0.
INFO 03-01 23:43:35 [logger.py:42] Received request cmpl-5c3af2953b0c4ddca14eadd46db9567d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:43:35 [async_llm.py:261] Added request cmpl-5c3af2953b0c4ddca14eadd46db9567d-0.
INFO 03-01 23:43:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:43:36 [logger.py:42] Received request cmpl-67c1d48c5fc941abb841fab51cc9993e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:43:36 [async_llm.py:261] Added request cmpl-67c1d48c5fc941abb841fab51cc9993e-0.
INFO 03-01 23:43:37 [logger.py:42] Received request cmpl-5864212fc57943898d4ba3eef0440950-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:43:37 [async_llm.py:261] Added request cmpl-5864212fc57943898d4ba3eef0440950-0.
INFO 03-01 23:43:38 [logger.py:42] Received request cmpl-8b17dee2d77d45dbb856bfb7de155dc3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:43:38 [async_llm.py:261] Added request cmpl-8b17dee2d77d45dbb856bfb7de155dc3-0.
INFO 03-01 23:43:39 [logger.py:42] Received request cmpl-bdd8844cd67a464a90d695fde8c6f2ff-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:43:39 [async_llm.py:261] Added request cmpl-bdd8844cd67a464a90d695fde8c6f2ff-0.
INFO 03-01 23:43:40 [logger.py:42] Received request cmpl-690a461b0650405da8d14ec968ccf3f3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:43:40 [async_llm.py:261] Added request cmpl-690a461b0650405da8d14ec968ccf3f3-0.
INFO 03-01 23:43:41 [logger.py:42] Received request cmpl-fc179304505f441299499fe709467a01-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:43:41 [async_llm.py:261] Added request cmpl-fc179304505f441299499fe709467a01-0.
INFO 03-01 23:43:43 [logger.py:42] Received request cmpl-cc16f861bc2f43e28c097be5949065cf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:43:43 [async_llm.py:261] Added request cmpl-cc16f861bc2f43e28c097be5949065cf-0.
INFO 03-01 23:43:44 [logger.py:42] Received request cmpl-06c0f92c5a6b4dab9d39d9e4c6de96d2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:43:44 [async_llm.py:261] Added request cmpl-06c0f92c5a6b4dab9d39d9e4c6de96d2-0.
INFO 03-01 23:43:45 [logger.py:42] Received request cmpl-64fe5e43ebcb403194a174f490e797d9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:43:45 [async_llm.py:261] Added request cmpl-64fe5e43ebcb403194a174f490e797d9-0.
INFO 03-01 23:43:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:43:56 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:44:29 [async_llm.py:261] Added request cmpl-5adc701be7e047dc99377593abf7a742-0.
INFO 03-01 23:44:30 [logger.py:42] Received request cmpl-ad24f393a06b43b7828cb8e063ee3bb0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:44:30 [async_llm.py:261] Added request cmpl-ad24f393a06b43b7828cb8e063ee3bb0-0.
INFO 03-01 23:44:31 [logger.py:42] Received request cmpl-fdfbac33571941f8968ad81d9d7ed8b3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:44:31 [async_llm.py:261] Added request cmpl-fdfbac33571941f8968ad81d9d7ed8b3-0.
INFO 03-01 23:44:32 [logger.py:42] Received request cmpl-c975c2d2e5ff406b9eb497b92ba64f27-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:44:32 [async_llm.py:261] Added request cmpl-c975c2d2e5ff406b9eb497b92ba64f27-0.
INFO 03-01 23:44:33 [logger.py:42] Received request cmpl-4d0f8e2b1f8a4dc48216cb4e463ee7c2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:44:33 [async_llm.py:261] Added request cmpl-4d0f8e2b1f8a4dc48216cb4e463ee7c2-0.
INFO 03-01 23:44:35 [logger.py:42] Received request cmpl-5321d9e7d0224ac1a2ab0aacb160cdfa-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:44:35 [async_llm.py:261] Added request cmpl-5321d9e7d0224ac1a2ab0aacb160cdfa-0.
INFO 03-01 23:44:36 [logger.py:42] Received request cmpl-69eb584fdefe41f0a5e793a5cfe18b4c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:44:36 [async_llm.py:261] Added request cmpl-69eb584fdefe41f0a5e793a5cfe18b4c-0.
INFO 03-01 23:44:36 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:44:37 [logger.py:42] Received request cmpl-641a702f2f47408b8cd3731e37be3ccc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:44:37 [async_llm.py:261] Added request cmpl-641a702f2f47408b8cd3731e37be3ccc-0.
INFO 03-01 23:44:38 [logger.py:42] Received request cmpl-0cc0e4daf2744b758abbb148e5e696a3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:44:38 [async_llm.py:261] Added request cmpl-0cc0e4daf2744b758abbb148e5e696a3-0.
INFO 03-01 23:44:39 [logger.py:42] Received request cmpl-b2aa961888474dbaa14b680b12ce77a4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:44:39 [async_llm.py:261] Added request cmpl-b2aa961888474dbaa14b680b12ce77a4-0.
INFO 03-01 23:44:40 [logger.py:42] Received request cmpl-fc6703ed915648f3b590fa7a9de11441-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:44:40 [async_llm.py:261] Added request cmpl-fc6703ed915648f3b590fa7a9de11441-0.
INFO 03-01 23:44:41 [logger.py:42] Received request cmpl-c67cf2e3f5d54794bd250153f9bd11ac-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:44:41 [async_llm.py:261] Added request cmpl-c67cf2e3f5d54794bd250153f9bd11ac-0.
INFO 03-01 23:44:42 [logger.py:42] Received request cmpl-46ef85ecc4874e048c822c37b4e3197f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:44:42 [async_llm.py:261] Added request cmpl-46ef85ecc4874e048c822c37b4e3197f-0.
INFO 03-01 23:44:43 [logger.py:42] Received request cmpl-cec6042945dd4bfebbedc6c9b7257aa9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:44:43 [async_llm.py:261] Added request cmpl-cec6042945dd4bfebbedc6c9b7257aa9-0.
INFO 03-01 23:44:44 [logger.py:42] Received request cmpl-078ef4f38cc842a59a17389e394c4c21-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:44:44 [async_llm.py:261] Added request cmpl-078ef4f38cc842a59a17389e394c4c21-0.
INFO 03-01 23:44:45 [logger.py:42] Received request cmpl-16bee541ebb54a20892afb9a0f416603-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:44:45 [async_llm.py:261] Added request cmpl-16bee541ebb54a20892afb9a0f416603-0.
INFO 03-01 23:44:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
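The periodic `loggers.py:116` lines are rolling averages over the ~10-second interval between them. With a 7-token prompt and a 5-token completion per request, the reported figures follow directly from the request count in each window — a quick sanity check:

```python
# Back-of-the-envelope check for the Engine 000 stats lines.
PROMPT_TOKENS = 7   # len(prompt_token_ids) in every request above
GEN_TOKENS = 5      # max_tokens=5, always exhausted here
WINDOW_S = 10.0     # spacing between loggers.py:116 lines

def avg_throughput(requests_in_window: int) -> tuple[float, float]:
    """(prompt tokens/s, generated tokens/s) averaged over the window."""
    prompt_tps = requests_in_window * PROMPT_TOKENS / WINDOW_S
    gen_tps = requests_in_window * GEN_TOKENS / WINDOW_S
    return prompt_tps, gen_tps

# 9 requests in a window -> (6.3, 4.5), matching most stats lines;
# 10 requests in a window -> (7.0, 5.0), matching the busier window.
```

`Running: 0 reqs` in the same lines is consistent with this: each 5-token completion finishes well inside a second, so the engine is idle whenever the sampler fires.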
INFO 03-01 23:44:46 [logger.py:42] Received request cmpl-f8cfcf0b25844cc3a7355796f7a1db5c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:44:46 [async_llm.py:261] Added request cmpl-f8cfcf0b25844cc3a7355796f7a1db5c-0.
INFO 03-01 23:44:48 [logger.py:42] Received request cmpl-d80f4376e2444eb8a038173f74cecefd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:44:48 [async_llm.py:261] Added request cmpl-d80f4376e2444eb8a038173f74cecefd-0.
INFO 03-01 23:44:49 [logger.py:42] Received request cmpl-197bcd08624c42d1810fd98b36da4f85-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:44:49 [async_llm.py:261] Added request cmpl-197bcd08624c42d1810fd98b36da4f85-0.
INFO 03-01 23:44:50 [logger.py:42] Received request cmpl-f02a1b6fabb743cdb85ea72b801924a8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:44:50 [async_llm.py:261] Added request cmpl-f02a1b6fabb743cdb85ea72b801924a8-0.
INFO 03-01 23:44:51 [logger.py:42] Received request cmpl-e1ac2e9453ae428abc748b0342c89e03-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:44:51 [async_llm.py:261] Added request cmpl-e1ac2e9453ae428abc748b0342c89e03-0.
INFO 03-01 23:44:52 [logger.py:42] Received request cmpl-c4e40f4667c34454b59d1c13d51cbc59-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:44:52 [async_llm.py:261] Added request cmpl-c4e40f4667c34454b59d1c13d51cbc59-0.
INFO 03-01 23:44:53 [logger.py:42] Received request cmpl-fa47ca6abf3d4afcb657057a14d97e6b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:44:53 [async_llm.py:261] Added request cmpl-fa47ca6abf3d4afcb657057a14d97e6b-0.
INFO 03-01 23:44:54 [logger.py:42] Received request cmpl-85e28c734c524cdf8df8096e6f548956-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:44:54 [async_llm.py:261] Added request cmpl-85e28c734c524cdf8df8096e6f548956-0.
INFO 03-01 23:44:55 [logger.py:42] Received request cmpl-85a7c3df2b804ab5a3bb9df885d7410d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:44:55 [async_llm.py:261] Added request cmpl-85a7c3df2b804ab5a3bb9df885d7410d-0.
INFO 03-01 23:44:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:44:56 [logger.py:42] Received request cmpl-b0abb959a7c6408682756e613695ae28-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:44:56 [async_llm.py:261] Added request cmpl-b0abb959a7c6408682756e613695ae28-0.
INFO 03-01 23:44:57 [logger.py:42] Received request cmpl-68eb52659bb94aa9879541053ddbbfba-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:44:57 [async_llm.py:261] Added request cmpl-68eb52659bb94aa9879541053ddbbfba-0.
INFO 03-01 23:44:58 [logger.py:42] Received request cmpl-bb2019625c494cc1a6d60f9a66f1e760-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:44:58 [async_llm.py:261] Added request cmpl-bb2019625c494cc1a6d60f9a66f1e760-0.
INFO 03-01 23:44:59 [logger.py:42] Received request cmpl-7c4e33e26d3641db9886702abe930a5b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:44:59 [async_llm.py:261] Added request cmpl-7c4e33e26d3641db9886702abe930a5b-0.
INFO 03-01 23:45:01 [logger.py:42] Received request cmpl-1cf0e8e81cf24f8d89ff2814b2f2958a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:45:01 [async_llm.py:261] Added request cmpl-1cf0e8e81cf24f8d89ff2814b2f2958a-0.
INFO 03-01 23:45:02 [logger.py:42] Received request cmpl-cc4949b6255149ed840f687f1c894d62-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:45:02 [async_llm.py:261] Added request cmpl-cc4949b6255149ed840f687f1c894d62-0.
INFO 03-01 23:45:03 [logger.py:42] Received request cmpl-eaa0eaf95e5a432a818ba0f8574cdb39-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:45:03 [async_llm.py:261] Added request cmpl-eaa0eaf95e5a432a818ba0f8574cdb39-0.
INFO 03-01 23:45:04 [logger.py:42] Received request cmpl-1c25cbf26660413cb794b2f06271cbe4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:45:04 [async_llm.py:261] Added request cmpl-1c25cbf26660413cb794b2f06271cbe4-0.
INFO 03-01 23:45:05 [logger.py:42] Received request cmpl-aba4b2980d2f4307964a7c4238bdc212-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:45:05 [async_llm.py:261] Added request cmpl-aba4b2980d2f4307964a7c4238bdc212-0.
INFO 03-01 23:45:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
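Each three-line cycle above is one round trip against the server's OpenAI-compatible completions endpoint: a POST to /v1/completions with the same prompt, greedy sampling (temperature=0.0, top_p=1.0) and max_tokens=5. A minimal stdlib-only client sketch that would produce this traffic is below; the base URL is a placeholder assumption, not taken from the log.

```python
import json
import urllib.request

# Payload mirroring the SamplingParams visible in each log entry:
# greedy decoding (temperature=0.0, top_p=1.0), capped at 5 tokens.
payload = {
    "model": "translategemma-27b-it-FP8-Dynamic",
    "prompt": "write a quick sort algorithm.",
    "max_tokens": 5,
    "temperature": 0.0,
    "top_p": 1.0,
    "n": 1,
}

def send_completion(base_url: str) -> dict:
    """POST the payload to the OpenAI-compatible /v1/completions route."""
    req = urllib.request.Request(
        f"{base_url}/v1/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    # The server answers with the usual OpenAI-style JSON body
    # (the "200 OK" access-log lines above are its uvicorn side).
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```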
INFO 03-01 23:45:06 [logger.py:42] Received request cmpl-2c04b009c20b4cc79b0723099401aede-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:45:06 [async_llm.py:261] Added request cmpl-2c04b009c20b4cc79b0723099401aede-0.
INFO 03-01 23:45:07 [logger.py:42] Received request cmpl-b3a4f2939c574468a19fe14ac02351c8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:45:07 [async_llm.py:261] Added request cmpl-b3a4f2939c574468a19fe14ac02351c8-0.
INFO 03-01 23:45:08 [logger.py:42] Received request cmpl-f9ab40d195c34b868fcf375f6b805216-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:45:08 [async_llm.py:261] Added request cmpl-f9ab40d195c34b868fcf375f6b805216-0.
INFO 03-01 23:45:09 [logger.py:42] Received request cmpl-899bfe667ab0483790c437db73ea3099-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:45:09 [async_llm.py:261] Added request cmpl-899bfe667ab0483790c437db73ea3099-0.
INFO 03-01 23:45:10 [logger.py:42] Received request cmpl-c44befc788a341b09ea8aa79b59e2cc9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:45:10 [async_llm.py:261] Added request cmpl-c44befc788a341b09ea8aa79b59e2cc9-0.
INFO 03-01 23:45:11 [logger.py:42] Received request cmpl-2b110e4d688b44c18ef177da6a3cc564-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:45:11 [async_llm.py:261] Added request cmpl-2b110e4d688b44c18ef177da6a3cc564-0.
INFO 03-01 23:45:12 [logger.py:42] Received request cmpl-9eb23e2b87304b4db58eac6390938043-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:45:12 [async_llm.py:261] Added request cmpl-9eb23e2b87304b4db58eac6390938043-0.
INFO 03-01 23:45:14 [logger.py:42] Received request cmpl-ebbe01300d4e416c983947948b370826-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:45:14 [async_llm.py:261] Added request cmpl-ebbe01300d4e416c983947948b370826-0.
INFO 03-01 23:45:15 [logger.py:42] Received request cmpl-2a74b8fb23054e1f8759566aab0b19cd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:45:15 [async_llm.py:261] Added request cmpl-2a74b8fb23054e1f8759566aab0b19cd-0.
INFO 03-01 23:45:16 [logger.py:42] Received request cmpl-3a2e835eb71f47a9b3311e7f606c9696-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:45:16 [async_llm.py:261] Added request cmpl-3a2e835eb71f47a9b3311e7f606c9696-0.
INFO 03-01 23:45:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:45:17 [logger.py:42] Received request cmpl-f19ddf3b3d714fb1aabd97bfc478bffb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:45:17 [async_llm.py:261] Added request cmpl-f19ddf3b3d714fb1aabd97bfc478bffb-0.
INFO 03-01 23:45:18 [logger.py:42] Received request cmpl-0c1c9f045f0243709d998bfea3c73d79-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:45:18 [async_llm.py:261] Added request cmpl-0c1c9f045f0243709d998bfea3c73d79-0.
INFO 03-01 23:45:19 [logger.py:42] Received request cmpl-39d4fb75544b483898b3c8b56e4508a1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:45:19 [async_llm.py:261] Added request cmpl-39d4fb75544b483898b3c8b56e4508a1-0.
INFO 03-01 23:45:20 [logger.py:42] Received request cmpl-0072efe42a374e3e88e0002340439b51-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:45:20 [async_llm.py:261] Added request cmpl-0072efe42a374e3e88e0002340439b51-0.
INFO 03-01 23:45:21 [logger.py:42] Received request cmpl-88e75d34fdce45ee9045e9e4cf249f5b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:45:21 [async_llm.py:261] Added request cmpl-88e75d34fdce45ee9045e9e4cf249f5b-0.
INFO 03-01 23:45:22 [logger.py:42] Received request cmpl-0b8fca1e5426409caa7442b1512be847-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:45:22 [async_llm.py:261] Added request cmpl-0b8fca1e5426409caa7442b1512be847-0.
INFO 03-01 23:45:23 [logger.py:42] Received request cmpl-9b2278e453af42c2a4d58eb0dca28046-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:45:23 [async_llm.py:261] Added request cmpl-9b2278e453af42c2a4d58eb0dca28046-0.
INFO 03-01 23:45:24 [logger.py:42] Received request cmpl-2e5b4bf0e7c9430e959be61f65a8ee1b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:45:24 [async_llm.py:261] Added request cmpl-2e5b4bf0e7c9430e959be61f65a8ee1b-0.
INFO 03-01 23:45:25 [logger.py:42] Received request cmpl-311be8430f7d4091b6c90ec8ed8c9b53-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:45:25 [async_llm.py:261] Added request cmpl-311be8430f7d4091b6c90ec8ed8c9b53-0.
INFO 03-01 23:45:26 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:45:27 [logger.py:42] Received request cmpl-634182e67807418b829a230be6f7a3f6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:45:27 [async_llm.py:261] Added request cmpl-634182e67807418b829a230be6f7a3f6-0.
INFO 03-01 23:45:28 [logger.py:42] Received request cmpl-e4fc4827096d4d82a5a1b163746640d3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:45:28 [async_llm.py:261] Added request cmpl-e4fc4827096d4d82a5a1b163746640d3-0.
INFO 03-01 23:45:29 [logger.py:42] Received request cmpl-636834ca1e344de5a9c8daff262b4f03-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:45:29 [async_llm.py:261] Added request cmpl-636834ca1e344de5a9c8daff262b4f03-0.
INFO 03-01 23:45:30 [logger.py:42] Received request cmpl-1df031ba378544a1986ddc616ea6e56d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:45:30 [async_llm.py:261] Added request cmpl-1df031ba378544a1986ddc616ea6e56d-0.
INFO 03-01 23:45:31 [logger.py:42] Received request cmpl-623cb1094e63494ca9793cafc5eab4f9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:45:31 [async_llm.py:261] Added request cmpl-623cb1094e63494ca9793cafc5eab4f9-0.
INFO 03-01 23:45:32 [logger.py:42] Received request cmpl-4ee118e6081449d6bb7da958ca0335b8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:45:32 [async_llm.py:261] Added request cmpl-4ee118e6081449d6bb7da958ca0335b8-0.
INFO 03-01 23:45:33 [logger.py:42] Received request cmpl-c054276610254d529856e9d917201da8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:45:33 [async_llm.py:261] Added request cmpl-c054276610254d529856e9d917201da8-0.
INFO 03-01 23:45:34 [logger.py:42] Received request cmpl-213602e9312242dcaab2fbdbfcac2560-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:45:34 [async_llm.py:261] Added request cmpl-213602e9312242dcaab2fbdbfcac2560-0.
INFO 03-01 23:45:35 [logger.py:42] Received request cmpl-590b08c1202e43efa1c7ddf7db1f591f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:45:35 [async_llm.py:261] Added request cmpl-590b08c1202e43efa1c7ddf7db1f591f-0.
INFO 03-01 23:45:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:45:36 [logger.py:42] Received request cmpl-69bd3e8c182b4b83ba0b3b10d7edbf35-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:45:36 [async_llm.py:261] Added request cmpl-69bd3e8c182b4b83ba0b3b10d7edbf35-0.
INFO 03-01 23:45:37 [logger.py:42] Received request cmpl-8f800d8b89d5414a9254a67d760c9b80-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:45:37 [async_llm.py:261] Added request cmpl-8f800d8b89d5414a9254a67d760c9b80-0.
INFO 03-01 23:45:38 [logger.py:42] Received request cmpl-dd3fe5171c3e4692b1a868e39ef99d81-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:45:38 [async_llm.py:261] Added request cmpl-dd3fe5171c3e4692b1a868e39ef99d81-0.
INFO 03-01 23:45:40 [logger.py:42] Received request cmpl-f1661dbb1c064527b6946b13b74744b0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:45:40 [async_llm.py:261] Added request cmpl-f1661dbb1c064527b6946b13b74744b0-0.
INFO 03-01 23:45:41 [logger.py:42] Received request cmpl-7ed1d57e263a4380a69effd19d6e29ba-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:45:41 [async_llm.py:261] Added request cmpl-7ed1d57e263a4380a69effd19d6e29ba-0.
INFO 03-01 23:45:42 [logger.py:42] Received request cmpl-6b46eb8455f4487b988f6c8607344f42-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:45:42 [async_llm.py:261] Added request cmpl-6b46eb8455f4487b988f6c8607344f42-0.
INFO 03-01 23:45:43 [logger.py:42] Received request cmpl-e3ed6d024e5648e882a6516678eeb133-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:45:43 [async_llm.py:261] Added request cmpl-e3ed6d024e5648e882a6516678eeb133-0.
INFO 03-01 23:45:44 [logger.py:42] Received request cmpl-d5bf9130a95f47868ac5dce6c3904625-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:45:44 [async_llm.py:261] Added request cmpl-d5bf9130a95f47868ac5dce6c3904625-0.
INFO 03-01 23:45:45 [logger.py:42] Received request cmpl-979adba67e7a45978efbd4919aa8a936-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:45:45 [async_llm.py:261] Added request cmpl-979adba67e7a45978efbd4919aa8a936-0.
INFO 03-01 23:45:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:45:46 [logger.py:42] Received request cmpl-9cb3de60984b4cd2ba6e33fc0795abe3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:45:46 [async_llm.py:261] Added request cmpl-9cb3de60984b4cd2ba6e33fc0795abe3-0.
INFO 03-01 23:45:47 [logger.py:42] Received request cmpl-735d803cea6d4baf9e0d471b2c454e80-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:45:47 [async_llm.py:261] Added request cmpl-735d803cea6d4baf9e0d471b2c454e80-0.
INFO 03-01 23:45:48 [logger.py:42] Received request cmpl-d1a9fbad126a44b8a771341f8c99c1cd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:45:48 [async_llm.py:261] Added request cmpl-d1a9fbad126a44b8a771341f8c99c1cd-0.
INFO 03-01 23:45:49 [logger.py:42] Received request cmpl-ba375bd00bf8465192e4f9556814f7fa-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:45:49 [async_llm.py:261] Added request cmpl-ba375bd00bf8465192e4f9556814f7fa-0.
INFO 03-01 23:45:50 [logger.py:42] Received request cmpl-46ad9e75b53b401eafb3e06c7cd7ba57-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:45:50 [async_llm.py:261] Added request cmpl-46ad9e75b53b401eafb3e06c7cd7ba57-0.
INFO 03-01 23:45:51 [logger.py:42] Received request cmpl-211ec27bddb4497588c0cffb281b2cdd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:45:51 [async_llm.py:261] Added request cmpl-211ec27bddb4497588c0cffb281b2cdd-0.
INFO 03-01 23:45:53 [logger.py:42] Received request cmpl-55f331ee56eb4e5bb717da66968d7262-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:45:53 [async_llm.py:261] Added request cmpl-55f331ee56eb4e5bb717da66968d7262-0.
INFO 03-01 23:45:54 [logger.py:42] Received request cmpl-eb86fe5dfba14fcf8123c52caf691a91-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:45:54 [async_llm.py:261] Added request cmpl-eb86fe5dfba14fcf8123c52caf691a91-0.
INFO 03-01 23:45:55 [logger.py:42] Received request cmpl-53d40008e93a4308a4dc13cfbe880220-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:45:55 [async_llm.py:261] Added request cmpl-53d40008e93a4308a4dc13cfbe880220-0.
INFO 03-01 23:45:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:45:56 [logger.py:42] Received request cmpl-f57a17e4b22941a29c43464c86979c8b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:45:56 [async_llm.py:261] Added request cmpl-f57a17e4b22941a29c43464c86979c8b-0.
INFO 03-01 23:45:57 [logger.py:42] Received request cmpl-c1b39c1b21684e74867b3161cb28bacd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:45:57 [async_llm.py:261] Added request cmpl-c1b39c1b21684e74867b3161cb28bacd-0.
INFO 03-01 23:45:58 [logger.py:42] Received request cmpl-352eb0e327724dc1bfad73a5744bb075-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:45:58 [async_llm.py:261] Added request cmpl-352eb0e327724dc1bfad73a5744bb075-0.
INFO 03-01 23:45:59 [logger.py:42] Received request cmpl-b3586cbe7fbf4e5885cc5e7b69778626-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:45:59 [async_llm.py:261] Added request cmpl-b3586cbe7fbf4e5885cc5e7b69778626-0.
INFO 03-01 23:46:00 [logger.py:42] Received request cmpl-15f5e0f21ccb42ffb483eab11384b2f5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:00 [async_llm.py:261] Added request cmpl-15f5e0f21ccb42ffb483eab11384b2f5-0.
INFO 03-01 23:46:01 [logger.py:42] Received request cmpl-4bc0c33e6c294569ba1c6159f8353411-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:01 [async_llm.py:261] Added request cmpl-4bc0c33e6c294569ba1c6159f8353411-0.
INFO 03-01 23:46:02 [logger.py:42] Received request cmpl-9c3f00c4c6af44c58ea9c5211b7f8c47-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:02 [async_llm.py:261] Added request cmpl-9c3f00c4c6af44c58ea9c5211b7f8c47-0.
INFO 03-01 23:46:03 [logger.py:42] Received request cmpl-2d5691f99ffe419fb41e1eb5601002c7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:03 [async_llm.py:261] Added request cmpl-2d5691f99ffe419fb41e1eb5601002c7-0.
INFO 03-01 23:46:04 [logger.py:42] Received request cmpl-763a14365d034fe9bf17db63a5c2562f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:04 [async_llm.py:261] Added request cmpl-763a14365d034fe9bf17db63a5c2562f-0.
INFO 03-01 23:46:06 [logger.py:42] Received request cmpl-41ff4af8a67146feade4029ac6aad51e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:06 [async_llm.py:261] Added request cmpl-41ff4af8a67146feade4029ac6aad51e-0.
INFO 03-01 23:46:06 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:46:07 [logger.py:42] Received request cmpl-5808a2fcf1f949b1a090101f6e37e505-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:07 [async_llm.py:261] Added request cmpl-5808a2fcf1f949b1a090101f6e37e505-0.
INFO 03-01 23:46:08 [logger.py:42] Received request cmpl-40d1704afd8843429b3160179dc0f5f3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:08 [async_llm.py:261] Added request cmpl-40d1704afd8843429b3160179dc0f5f3-0.
INFO 03-01 23:46:09 [logger.py:42] Received request cmpl-0140d8e534b5474a9f43e88571b868d9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:09 [async_llm.py:261] Added request cmpl-0140d8e534b5474a9f43e88571b868d9-0.
INFO 03-01 23:46:10 [logger.py:42] Received request cmpl-3620f7d137514111806e87f4ef3051a6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:10 [async_llm.py:261] Added request cmpl-3620f7d137514111806e87f4ef3051a6-0.
INFO 03-01 23:46:11 [logger.py:42] Received request cmpl-d9106e1adfc34effa2d2d1802901d56d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:11 [async_llm.py:261] Added request cmpl-d9106e1adfc34effa2d2d1802901d56d-0.
INFO 03-01 23:46:12 [logger.py:42] Received request cmpl-8294cafc95364349b8b41f5245127eb5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:12 [async_llm.py:261] Added request cmpl-8294cafc95364349b8b41f5245127eb5-0.
INFO 03-01 23:46:13 [logger.py:42] Received request cmpl-a907f66467a14e479020de676c1d5b39-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:13 [async_llm.py:261] Added request cmpl-a907f66467a14e479020de676c1d5b39-0.
INFO 03-01 23:46:14 [logger.py:42] Received request cmpl-5ed6803cecee498d84a94e29a5541207-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:14 [async_llm.py:261] Added request cmpl-5ed6803cecee498d84a94e29a5541207-0.
INFO 03-01 23:46:15 [logger.py:42] Received request cmpl-b062891aec10466999c69e1531314ec8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:15 [async_llm.py:261] Added request cmpl-b062891aec10466999c69e1531314ec8-0.
INFO 03-01 23:46:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:46:16 [logger.py:42] Received request cmpl-6e6e448401284d32ab1884c77947900d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:16 [async_llm.py:261] Added request cmpl-6e6e448401284d32ab1884c77947900d-0.
INFO 03-01 23:46:17 [logger.py:42] Received request cmpl-462f036d0b24477ca3393ba668464ae1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:17 [async_llm.py:261] Added request cmpl-462f036d0b24477ca3393ba668464ae1-0.
INFO 03-01 23:46:19 [logger.py:42] Received request cmpl-377a3e3140a540f085192140422b6aa0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:19 [async_llm.py:261] Added request cmpl-377a3e3140a540f085192140422b6aa0-0.
INFO 03-01 23:46:20 [logger.py:42] Received request cmpl-c7423a1fea444c248d1562615d86241f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:20 [async_llm.py:261] Added request cmpl-c7423a1fea444c248d1562615d86241f-0.
INFO 03-01 23:46:21 [logger.py:42] Received request cmpl-3d4b0f81e1a043f3b351b2fd1f5dd3b3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:21 [async_llm.py:261] Added request cmpl-3d4b0f81e1a043f3b351b2fd1f5dd3b3-0.
INFO 03-01 23:46:22 [logger.py:42] Received request cmpl-dcc4e15267644750b502c27d952db8c5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:22 [async_llm.py:261] Added request cmpl-dcc4e15267644750b502c27d952db8c5-0.
INFO 03-01 23:46:23 [logger.py:42] Received request cmpl-17f65ee2d55b4d44914097c2b4963a19-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:23 [async_llm.py:261] Added request cmpl-17f65ee2d55b4d44914097c2b4963a19-0.
INFO 03-01 23:46:24 [logger.py:42] Received request cmpl-ba52a06b98fb4a358754e72ed662873e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:24 [async_llm.py:261] Added request cmpl-ba52a06b98fb4a358754e72ed662873e-0.
INFO 03-01 23:46:25 [logger.py:42] Received request cmpl-33e7e0eedb744effa7700ce464b5b66a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:25 [async_llm.py:261] Added request cmpl-33e7e0eedb744effa7700ce464b5b66a-0.
INFO 03-01 23:46:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:46:26 [logger.py:42] Received request cmpl-7c4e0e3074274fe495f660869bc0ae31-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:26 [async_llm.py:261] Added request cmpl-7c4e0e3074274fe495f660869bc0ae31-0.
INFO 03-01 23:46:27 [logger.py:42] Received request cmpl-c276c6416149499e9bf59574bd736d4c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:27 [async_llm.py:261] Added request cmpl-c276c6416149499e9bf59574bd736d4c-0.
INFO 03-01 23:46:28 [logger.py:42] Received request cmpl-371e0bdb815f47dfb76eac7c99029c14-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:28 [async_llm.py:261] Added request cmpl-371e0bdb815f47dfb76eac7c99029c14-0.
INFO 03-01 23:46:29 [logger.py:42] Received request cmpl-a1c0c7998523481081228c7ddc64485d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:29 [async_llm.py:261] Added request cmpl-a1c0c7998523481081228c7ddc64485d-0.
INFO 03-01 23:46:30 [logger.py:42] Received request cmpl-20aa553b08d7468082fce8b7e916bd49-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:30 [async_llm.py:261] Added request cmpl-20aa553b08d7468082fce8b7e916bd49-0.
INFO 03-01 23:46:32 [logger.py:42] Received request cmpl-d4c903053d554b2ba7b21ceeab26088f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:32 [async_llm.py:261] Added request cmpl-d4c903053d554b2ba7b21ceeab26088f-0.
INFO 03-01 23:46:33 [logger.py:42] Received request cmpl-527088795b7b4a878c63c40772c22379-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:33 [async_llm.py:261] Added request cmpl-527088795b7b4a878c63c40772c22379-0.
INFO 03-01 23:46:34 [logger.py:42] Received request cmpl-5e2892eea4a14166a1850df1ff0a3435-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:34 [async_llm.py:261] Added request cmpl-5e2892eea4a14166a1850df1ff0a3435-0.
INFO 03-01 23:46:35 [logger.py:42] Received request cmpl-19ae4b90764b46e1b68242ae4db88246-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:35 [async_llm.py:261] Added request cmpl-19ae4b90764b46e1b68242ae4db88246-0.
INFO 03-01 23:46:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:46:36 [logger.py:42] Received request cmpl-caa5e17ae44a4b079e850f1701fa6991-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:36 [async_llm.py:261] Added request cmpl-caa5e17ae44a4b079e850f1701fa6991-0.
INFO 03-01 23:46:37 [logger.py:42] Received request cmpl-987a4b27b3cf4d07ab4ac50660878337-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:37 [async_llm.py:261] Added request cmpl-987a4b27b3cf4d07ab4ac50660878337-0.
INFO 03-01 23:46:38 [logger.py:42] Received request cmpl-9b7cde58111f47f0a8376b732e2e7ad5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:38 [async_llm.py:261] Added request cmpl-9b7cde58111f47f0a8376b732e2e7ad5-0.
INFO 03-01 23:46:39 [logger.py:42] Received request cmpl-5b751bf5e0134c5cb9ff509c2600605f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:39 [async_llm.py:261] Added request cmpl-5b751bf5e0134c5cb9ff509c2600605f-0.
INFO 03-01 23:46:40 [logger.py:42] Received request cmpl-3e31583be7e94abeb417cfc6ae378d2e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:40 [async_llm.py:261] Added request cmpl-3e31583be7e94abeb417cfc6ae378d2e-0.
INFO 03-01 23:46:41 [logger.py:42] Received request cmpl-7730751e103548d0aec83eaf8e904f90-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:41 [async_llm.py:261] Added request cmpl-7730751e103548d0aec83eaf8e904f90-0.
INFO 03-01 23:46:42 [logger.py:42] Received request cmpl-3e83b90b4c3146c394bd9cae50e40897-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:42 [async_llm.py:261] Added request cmpl-3e83b90b4c3146c394bd9cae50e40897-0.
INFO 03-01 23:46:43 [logger.py:42] Received request cmpl-969f66aa9a0a46268b1f9ec9c6befb24-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:43 [async_llm.py:261] Added request cmpl-969f66aa9a0a46268b1f9ec9c6befb24-0.
INFO 03-01 23:46:45 [logger.py:42] Received request cmpl-2f3c2825d0d1407882072581f6209754-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:45 [async_llm.py:261] Added request cmpl-2f3c2825d0d1407882072581f6209754-0.
INFO 03-01 23:46:46 [logger.py:42] Received request cmpl-3be2ffaae3cf4bc1a7b0b0f121f36213-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:46 [async_llm.py:261] Added request cmpl-3be2ffaae3cf4bc1a7b0b0f121f36213-0.
INFO 03-01 23:46:46 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.9 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 1.3%, Prefix cache hit rate: 0.0%
INFO 03-01 23:46:47 [logger.py:42] Received request cmpl-3f7c589d62ae45a7ba2066643ffd981e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:47 [async_llm.py:261] Added request cmpl-3f7c589d62ae45a7ba2066643ffd981e-0.
INFO 03-01 23:46:48 [logger.py:42] Received request cmpl-1829f184015140fda14bd1a5668809a6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:48 [async_llm.py:261] Added request cmpl-1829f184015140fda14bd1a5668809a6-0.
INFO 03-01 23:46:49 [logger.py:42] Received request cmpl-f10dbaf2918d4e3981e40ed794138686-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:49 [async_llm.py:261] Added request cmpl-f10dbaf2918d4e3981e40ed794138686-0.
INFO 03-01 23:46:50 [logger.py:42] Received request cmpl-41077f8bbe7d4eef9c307aab1e89e8d6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:50 [async_llm.py:261] Added request cmpl-41077f8bbe7d4eef9c307aab1e89e8d6-0.
INFO 03-01 23:46:51 [logger.py:42] Received request cmpl-d862ac71a5e54c1eb58ff742964ab701-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:51 [async_llm.py:261] Added request cmpl-d862ac71a5e54c1eb58ff742964ab701-0.
INFO 03-01 23:46:52 [logger.py:42] Received request cmpl-8922bef6cdaa42768beabbb62bc3f150-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:52 [async_llm.py:261] Added request cmpl-8922bef6cdaa42768beabbb62bc3f150-0.
INFO 03-01 23:46:53 [logger.py:42] Received request cmpl-4b85730f5cbe48c4abcfb0d298a316aa-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:53 [async_llm.py:261] Added request cmpl-4b85730f5cbe48c4abcfb0d298a316aa-0.
INFO 03-01 23:46:54 [logger.py:42] Received request cmpl-981393a334c54809af27aaadfcd7cf66-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:54 [async_llm.py:261] Added request cmpl-981393a334c54809af27aaadfcd7cf66-0.
INFO 03-01 23:46:55 [logger.py:42] Received request cmpl-e4b7c6c0b3ee47f9a321926de442d9b0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:55 [async_llm.py:261] Added request cmpl-e4b7c6c0b3ee47f9a321926de442d9b0-0.
INFO 03-01 23:46:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.6 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:46:56 [logger.py:42] Received request cmpl-04954d0157a04c49b60f148c10ed1793-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:56 [async_llm.py:261] Added request cmpl-04954d0157a04c49b60f148c10ed1793-0.
INFO 03-01 23:46:58 [logger.py:42] Received request cmpl-935e4442093547e087d419237359cbef-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:58 [async_llm.py:261] Added request cmpl-935e4442093547e087d419237359cbef-0.
INFO 03-01 23:46:59 [logger.py:42] Received request cmpl-ececb57e51b24cdebc2b75306c0263cc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:59 [async_llm.py:261] Added request cmpl-ececb57e51b24cdebc2b75306c0263cc-0.
INFO 03-01 23:47:00 [logger.py:42] Received request cmpl-5cd6235c64d24596aa5d8631976bf806-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:47:00 [async_llm.py:261] Added request cmpl-5cd6235c64d24596aa5d8631976bf806-0.
[… the "Received request" / "POST /v1/completions 200 OK" / "Added request" triplet repeats about once per second through 23:47:44, differing only in timestamp and request ID …]
INFO 03-01 23:47:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:47:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:47:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:47:36 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:47:44 [async_llm.py:261] Added request cmpl-46a5d0c87ca540a5a8ad5adc905d284a-0.
INFO 03-01 23:47:45 [logger.py:42] Received request cmpl-d8a4f9ecb87b48a68b690e4a00bef188-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:47:45 [async_llm.py:261] Added request cmpl-d8a4f9ecb87b48a68b690e4a00bef188-0.
INFO 03-01 23:47:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:47:46 [logger.py:42] Received request cmpl-751959b2504943948f4f1efba80fd9d2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:47:46 [async_llm.py:261] Added request cmpl-751959b2504943948f4f1efba80fd9d2-0.
INFO 03-01 23:47:47 [logger.py:42] Received request cmpl-b8712f9a47e049fa895b051c77f67b98-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:47:47 [async_llm.py:261] Added request cmpl-b8712f9a47e049fa895b051c77f67b98-0.
INFO 03-01 23:47:48 [logger.py:42] Received request cmpl-516454e1b58e47068101ea44ac487af1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:47:48 [async_llm.py:261] Added request cmpl-516454e1b58e47068101ea44ac487af1-0.
INFO 03-01 23:47:50 [logger.py:42] Received request cmpl-719bbf0254b74d04815bfe8ac469d24f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:47:50 [async_llm.py:261] Added request cmpl-719bbf0254b74d04815bfe8ac469d24f-0.
INFO 03-01 23:47:51 [logger.py:42] Received request cmpl-7ac1258f434a44519113bbc141cb1c21-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:47:51 [async_llm.py:261] Added request cmpl-7ac1258f434a44519113bbc141cb1c21-0.
INFO 03-01 23:47:52 [logger.py:42] Received request cmpl-dace6326eebc4e13b32ca81052d4af49-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:47:52 [async_llm.py:261] Added request cmpl-dace6326eebc4e13b32ca81052d4af49-0.
INFO 03-01 23:47:53 [logger.py:42] Received request cmpl-72434de5e60442238c2c2e0a1dddc932-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:47:53 [async_llm.py:261] Added request cmpl-72434de5e60442238c2c2e0a1dddc932-0.
INFO 03-01 23:47:54 [logger.py:42] Received request cmpl-18ee6f91812c49b199173422ffdf67f7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:47:54 [async_llm.py:261] Added request cmpl-18ee6f91812c49b199173422ffdf67f7-0.
INFO 03-01 23:47:55 [logger.py:42] Received request cmpl-8853229db9804690bd5667f7745eef33-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:47:55 [async_llm.py:261] Added request cmpl-8853229db9804690bd5667f7745eef33-0.
INFO 03-01 23:47:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:47:56 [logger.py:42] Received request cmpl-96f81ab9912140f4b3ccc62871ff49ce-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:47:56 [async_llm.py:261] Added request cmpl-96f81ab9912140f4b3ccc62871ff49ce-0.
INFO 03-01 23:47:57 [logger.py:42] Received request cmpl-7a3db85c33184c43acc7870c341770ba-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:47:57 [async_llm.py:261] Added request cmpl-7a3db85c33184c43acc7870c341770ba-0.
INFO 03-01 23:47:58 [logger.py:42] Received request cmpl-d043fd1858664c6987e1789a5a868d03-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:47:58 [async_llm.py:261] Added request cmpl-d043fd1858664c6987e1789a5a868d03-0.
INFO 03-01 23:47:59 [logger.py:42] Received request cmpl-f7de41b9bbbd4c3dbb571e1aafad793d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:47:59 [async_llm.py:261] Added request cmpl-f7de41b9bbbd4c3dbb571e1aafad793d-0.
INFO 03-01 23:48:00 [logger.py:42] Received request cmpl-88dc1bcd9491418e87f20ccc9b1016e9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:48:00 [async_llm.py:261] Added request cmpl-88dc1bcd9491418e87f20ccc9b1016e9-0.
INFO 03-01 23:48:01 [logger.py:42] Received request cmpl-2c454d9f514f4b4886a71bf97a5d2365-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:48:01 [async_llm.py:261] Added request cmpl-2c454d9f514f4b4886a71bf97a5d2365-0.
INFO 03-01 23:48:03 [logger.py:42] Received request cmpl-cf168d2418384ee6b454d6b6a04f6ad8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:48:03 [async_llm.py:261] Added request cmpl-cf168d2418384ee6b454d6b6a04f6ad8-0.
INFO 03-01 23:48:04 [logger.py:42] Received request cmpl-3953fb3cae024425815f03189c340e5b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:48:04 [async_llm.py:261] Added request cmpl-3953fb3cae024425815f03189c340e5b-0.
INFO 03-01 23:48:05 [logger.py:42] Received request cmpl-822fb36ee1a84424b51a92b92f9a0051-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:48:05 [async_llm.py:261] Added request cmpl-822fb36ee1a84424b51a92b92f9a0051-0.
INFO 03-01 23:48:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:48:06 [logger.py:42] Received request cmpl-0013cb3880414edca29b61b9b5b25672-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:48:06 [async_llm.py:261] Added request cmpl-0013cb3880414edca29b61b9b5b25672-0.
INFO 03-01 23:48:07 [logger.py:42] Received request cmpl-ab91777164e146c48ea34cd23b5b20ab-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:48:07 [async_llm.py:261] Added request cmpl-ab91777164e146c48ea34cd23b5b20ab-0.
INFO 03-01 23:48:08 [logger.py:42] Received request cmpl-ee246a897b654c169d0212b6fa1ac4c4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:48:08 [async_llm.py:261] Added request cmpl-ee246a897b654c169d0212b6fa1ac4c4-0.
INFO 03-01 23:48:09 [logger.py:42] Received request cmpl-744046125dd147299bf62e42e504d853-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:48:09 [async_llm.py:261] Added request cmpl-744046125dd147299bf62e42e504d853-0.
INFO 03-01 23:48:10 [logger.py:42] Received request cmpl-09ad352b38884a609940f8092357a259-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:48:10 [async_llm.py:261] Added request cmpl-09ad352b38884a609940f8092357a259-0.
INFO 03-01 23:48:11 [logger.py:42] Received request cmpl-4b677e42a92a4d03bb563e626d95cb3e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:48:11 [async_llm.py:261] Added request cmpl-4b677e42a92a4d03bb563e626d95cb3e-0.
INFO 03-01 23:48:12 [logger.py:42] Received request cmpl-a6d5091c83bd48699400c4eb739cdd11-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:48:12 [async_llm.py:261] Added request cmpl-a6d5091c83bd48699400c4eb739cdd11-0.
INFO 03-01 23:48:13 [logger.py:42] Received request cmpl-eb777fa8dac847728c42f0b8c435c79e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:48:13 [async_llm.py:261] Added request cmpl-eb777fa8dac847728c42f0b8c435c79e-0.
INFO 03-01 23:48:14 [logger.py:42] Received request cmpl-b1020f0ad10543749cf2b8b0ce314c07-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:48:14 [async_llm.py:261] Added request cmpl-b1020f0ad10543749cf2b8b0ce314c07-0.
INFO 03-01 23:48:16 [logger.py:42] Received request cmpl-239cb0c98e924c3f84e13319d618c68d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:48:16 [async_llm.py:261] Added request cmpl-239cb0c98e924c3f84e13319d618c68d-0.
INFO 03-01 23:48:16 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:48:17 [logger.py:42] Received request cmpl-6523117e724448e3b3a8e8a505850a7b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:48:17 [async_llm.py:261] Added request cmpl-6523117e724448e3b3a8e8a505850a7b-0.
INFO 03-01 23:48:18 [logger.py:42] Received request cmpl-d41556d50abb4454b6e519ca4e203755-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:48:18 [async_llm.py:261] Added request cmpl-d41556d50abb4454b6e519ca4e203755-0.
[... 7 identical request/response cycles omitted (23:48:19 – 23:48:25): same prompt 'write a quick sort algorithm.', max_tokens=5, temperature=0.0, each answered "POST /v1/completions HTTP/1.1" 200 OK ...]
INFO 03-01 23:48:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[... 9 identical request/response cycles omitted (23:48:26 – 23:48:35): same prompt, max_tokens=5, all 200 OK ...]
INFO 03-01 23:48:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[... 9 identical request/response cycles omitted (23:48:36 – 23:48:45): same prompt, max_tokens=5, all 200 OK ...]
INFO 03-01 23:48:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[... 10 identical request/response cycles omitted (23:48:46 – 23:48:56): same prompt, max_tokens=5, all 200 OK ...]
INFO 03-01 23:48:56 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.7 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 1.3%, Prefix cache hit rate: 0.0%
[... 5 identical request/response cycles omitted (23:48:57 – 23:49:01): same prompt, max_tokens=5, all 200 OK ...]
INFO 03-01 23:49:02 [logger.py:42] Received request cmpl-e06c0f7f9203474aa35789482bb97374-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:49:02 [async_llm.py:261] Added request cmpl-e06c0f7f9203474aa35789482bb97374-0.
INFO 03-01 23:49:03 [logger.py:42] Received request cmpl-e85e1b60f7bb480aafb6ed731c526081-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:49:03 [async_llm.py:261] Added request cmpl-e85e1b60f7bb480aafb6ed731c526081-0.
INFO 03-01 23:49:04 [logger.py:42] Received request cmpl-8c5ce0f7c80c4edc8918179459ff8247-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:49:04 [async_llm.py:261] Added request cmpl-8c5ce0f7c80c4edc8918179459ff8247-0.
INFO 03-01 23:49:05 [logger.py:42] Received request cmpl-dc9779630cb04593959dd57c370be7ed-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:49:05 [async_llm.py:261] Added request cmpl-dc9779630cb04593959dd57c370be7ed-0.
INFO 03-01 23:49:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.8 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:49:06 [logger.py:42] Received request cmpl-9de77e5c1b0a471d9094fddb49fb2da4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:49:06 [async_llm.py:261] Added request cmpl-9de77e5c1b0a471d9094fddb49fb2da4-0.
INFO 03-01 23:49:08 [logger.py:42] Received request cmpl-f6d74b1b89054bdc9935864f3133facf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:49:08 [async_llm.py:261] Added request cmpl-f6d74b1b89054bdc9935864f3133facf-0.
INFO 03-01 23:49:09 [logger.py:42] Received request cmpl-1ee976ba78264b9b8940bbe95b9e80dd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:49:09 [async_llm.py:261] Added request cmpl-1ee976ba78264b9b8940bbe95b9e80dd-0.
INFO 03-01 23:49:10 [logger.py:42] Received request cmpl-0a4a8e839f154999bdcf8febe21dc274-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:49:10 [async_llm.py:261] Added request cmpl-0a4a8e839f154999bdcf8febe21dc274-0.
INFO 03-01 23:49:11 [logger.py:42] Received request cmpl-0f54345a7f4f4a7d880f3d9f559c39ec-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:49:11 [async_llm.py:261] Added request cmpl-0f54345a7f4f4a7d880f3d9f559c39ec-0.
INFO 03-01 23:49:12 [logger.py:42] Received request cmpl-748b4bc57a5c4d96bc140679009f98b0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:49:12 [async_llm.py:261] Added request cmpl-748b4bc57a5c4d96bc140679009f98b0-0.
INFO 03-01 23:49:13 [logger.py:42] Received request cmpl-89414c47eab749b2990068fc2d5d8033-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:49:13 [async_llm.py:261] Added request cmpl-89414c47eab749b2990068fc2d5d8033-0.
INFO 03-01 23:49:14 [logger.py:42] Received request cmpl-51ea4df869444b6d82eee5ae18603970-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:49:14 [async_llm.py:261] Added request cmpl-51ea4df869444b6d82eee5ae18603970-0.
INFO 03-01 23:49:15 [logger.py:42] Received request cmpl-be5da4cd60c74e42ae452592581f680f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:49:15 [async_llm.py:261] Added request cmpl-be5da4cd60c74e42ae452592581f680f-0.
INFO 03-01 23:49:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:49:16 [logger.py:42] Received request cmpl-71e99f90aab64b9da0bd7fa0da1cba12-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:49:16 [async_llm.py:261] Added request cmpl-71e99f90aab64b9da0bd7fa0da1cba12-0.
INFO 03-01 23:49:17 [logger.py:42] Received request cmpl-ad49a11b5d314efbb0382e8cb6fa0c68-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:49:17 [async_llm.py:261] Added request cmpl-ad49a11b5d314efbb0382e8cb6fa0c68-0.
INFO 03-01 23:49:18 [logger.py:42] Received request cmpl-b4a5d3c764914c98971c5d464ed38f2b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:49:18 [async_llm.py:261] Added request cmpl-b4a5d3c764914c98971c5d464ed38f2b-0.
INFO 03-01 23:49:19 [logger.py:42] Received request cmpl-e9d59484e4714cc39ab50e07881acda8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:49:19 [async_llm.py:261] Added request cmpl-e9d59484e4714cc39ab50e07881acda8-0.
INFO 03-01 23:49:21 [logger.py:42] Received request cmpl-494fa670c6674c3b8c855c61d0227da1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:49:21 [async_llm.py:261] Added request cmpl-494fa670c6674c3b8c855c61d0227da1-0.
INFO 03-01 23:49:22 [logger.py:42] Received request cmpl-6b5f99649f004c40a2ea5b186bffb309-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:49:22 [async_llm.py:261] Added request cmpl-6b5f99649f004c40a2ea5b186bffb309-0.
INFO 03-01 23:49:23 [logger.py:42] Received request cmpl-3f427cc29bf7406ab82bcdba2338e13f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:49:23 [async_llm.py:261] Added request cmpl-3f427cc29bf7406ab82bcdba2338e13f-0.
INFO 03-01 23:49:24 [logger.py:42] Received request cmpl-96f05ecd2a5647d4bea89b82a272c3ed-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:49:24 [async_llm.py:261] Added request cmpl-96f05ecd2a5647d4bea89b82a272c3ed-0.
INFO 03-01 23:49:25 [logger.py:42] Received request cmpl-b7f901f0368149da811e9a6895592fbd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:49:25 [async_llm.py:261] Added request cmpl-b7f901f0368149da811e9a6895592fbd-0.
INFO 03-01 23:49:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:49:26 [logger.py:42] Received request cmpl-ea9ba99d2e004e2c9b6b50a0091299b2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:49:26 [async_llm.py:261] Added request cmpl-ea9ba99d2e004e2c9b6b50a0091299b2-0.
INFO 03-01 23:49:27 [logger.py:42] Received request cmpl-15fe8e0b36e647a182530ae018459309-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:49:27 [async_llm.py:261] Added request cmpl-15fe8e0b36e647a182530ae018459309-0.
INFO 03-01 23:49:28 [logger.py:42] Received request cmpl-c357ef3e678d48a39f77085d98ee7795-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:49:28 [async_llm.py:261] Added request cmpl-c357ef3e678d48a39f77085d98ee7795-0.
INFO 03-01 23:49:29 [logger.py:42] Received request cmpl-4f724629f87d426db0fa425f55fdf785-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:49:29 [async_llm.py:261] Added request cmpl-4f724629f87d426db0fa425f55fdf785-0.
INFO 03-01 23:49:30 [logger.py:42] Received request cmpl-cc55acbfd82249da9949a9a0a15b6662-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:49:30 [async_llm.py:261] Added request cmpl-cc55acbfd82249da9949a9a0a15b6662-0.
INFO 03-01 23:49:31 [logger.py:42] Received request cmpl-31124213a8af4977a5514a69dbbab153-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:49:31 [async_llm.py:261] Added request cmpl-31124213a8af4977a5514a69dbbab153-0.
INFO 03-01 23:49:32 [logger.py:42] Received request cmpl-f3e013e199ce4d89b9cc5a2005279a7b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:49:32 [async_llm.py:261] Added request cmpl-f3e013e199ce4d89b9cc5a2005279a7b-0.
INFO 03-01 23:49:34 [logger.py:42] Received request cmpl-82bd78f70f4f49e09b600b0bf83c3a27-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:49:34 [async_llm.py:261] Added request cmpl-82bd78f70f4f49e09b600b0bf83c3a27-0.
INFO 03-01 23:49:35 [logger.py:42] Received request cmpl-47fe101420284cc2b7a25e7d7b12fbdc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:49:35 [async_llm.py:261] Added request cmpl-47fe101420284cc2b7a25e7d7b12fbdc-0.
INFO 03-01 23:49:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:49:36 [logger.py:42] Received request cmpl-d8e3dbc63c4e40799c17c0fde0e7d7a0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:49:36 [async_llm.py:261] Added request cmpl-d8e3dbc63c4e40799c17c0fde0e7d7a0-0.
[... 9 similar Received request / 200 OK / Added request triples (23:49:37–23:49:45) elided; only the request IDs and timestamps differ ...]
INFO 03-01 23:49:46 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[... 9 similar Received request / 200 OK / Added request triples (23:49:47–23:49:55) elided; only the request IDs and timestamps differ ...]
INFO 03-01 23:49:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[... 9 similar Received request / 200 OK / Added request triples (23:49:56–23:50:05) elided; only the request IDs and timestamps differ ...]
INFO 03-01 23:50:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[... 9 similar Received request / 200 OK / Added request triples (23:50:06–23:50:15) elided; only the request IDs and timestamps differ ...]
INFO 03-01 23:50:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[... 4 similar Received request / 200 OK / Added request triples (23:50:16–23:50:19) elided; only the request IDs and timestamps differ ...]
INFO 03-01 23:50:20 [logger.py:42] Received request cmpl-6319f8426401412fa0e942082f223a4f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:20 [async_llm.py:261] Added request cmpl-6319f8426401412fa0e942082f223a4f-0.
INFO 03-01 23:50:21 [logger.py:42] Received request cmpl-a836e6b9ec474060a8a4cfb4b5421168-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:21 [async_llm.py:261] Added request cmpl-a836e6b9ec474060a8a4cfb4b5421168-0.
INFO 03-01 23:50:22 [logger.py:42] Received request cmpl-314ac12d613a43d3beee1bf0d39dcd6d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:22 [async_llm.py:261] Added request cmpl-314ac12d613a43d3beee1bf0d39dcd6d-0.
INFO 03-01 23:50:23 [logger.py:42] Received request cmpl-ce1266f427a54b69b6a596dd717c3760-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:23 [async_llm.py:261] Added request cmpl-ce1266f427a54b69b6a596dd717c3760-0.
INFO 03-01 23:50:24 [logger.py:42] Received request cmpl-04fd13bdb3a84d899fdb267630db7f65-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:24 [async_llm.py:261] Added request cmpl-04fd13bdb3a84d899fdb267630db7f65-0.
INFO 03-01 23:50:26 [logger.py:42] Received request cmpl-14fdf86cd3434e70ad4b32d360ac51a1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:26 [async_llm.py:261] Added request cmpl-14fdf86cd3434e70ad4b32d360ac51a1-0.
INFO 03-01 23:50:26 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:50:27 [logger.py:42] Received request cmpl-4b8efdbbf161415087f74a2497b28c02-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:27 [async_llm.py:261] Added request cmpl-4b8efdbbf161415087f74a2497b28c02-0.
INFO 03-01 23:50:28 [logger.py:42] Received request cmpl-1598975af9f746048b6f44dc84a32a19-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:28 [async_llm.py:261] Added request cmpl-1598975af9f746048b6f44dc84a32a19-0.
INFO 03-01 23:50:29 [logger.py:42] Received request cmpl-0f9ad3cfd3a84090b913b6fd9b0c487c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:29 [async_llm.py:261] Added request cmpl-0f9ad3cfd3a84090b913b6fd9b0c487c-0.
INFO 03-01 23:50:30 [logger.py:42] Received request cmpl-37656129ca53457bb20aa7855cfee29d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:30 [async_llm.py:261] Added request cmpl-37656129ca53457bb20aa7855cfee29d-0.
INFO 03-01 23:50:31 [logger.py:42] Received request cmpl-b7db3960ae004fa7961482b0fbdf83cc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:31 [async_llm.py:261] Added request cmpl-b7db3960ae004fa7961482b0fbdf83cc-0.
INFO 03-01 23:50:32 [logger.py:42] Received request cmpl-73e193641e534b4683b1aca4a1c86fbc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:32 [async_llm.py:261] Added request cmpl-73e193641e534b4683b1aca4a1c86fbc-0.
INFO 03-01 23:50:33 [logger.py:42] Received request cmpl-e6ea8e5390c448f9a951671bb55b9d00-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:33 [async_llm.py:261] Added request cmpl-e6ea8e5390c448f9a951671bb55b9d00-0.
INFO 03-01 23:50:34 [logger.py:42] Received request cmpl-669e0bc46d3f4e8794dce20bb761d987-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:34 [async_llm.py:261] Added request cmpl-669e0bc46d3f4e8794dce20bb761d987-0.
INFO 03-01 23:50:35 [logger.py:42] Received request cmpl-edb070130efe410fb453697358aabc71-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:35 [async_llm.py:261] Added request cmpl-edb070130efe410fb453697358aabc71-0.
INFO 03-01 23:50:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:50:36 [logger.py:42] Received request cmpl-fcb00fc588db465d90294451056b226a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:36 [async_llm.py:261] Added request cmpl-fcb00fc588db465d90294451056b226a-0.
INFO 03-01 23:50:37 [logger.py:42] Received request cmpl-6c6a0aed532249d1bd53bb2c38595af5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:37 [async_llm.py:261] Added request cmpl-6c6a0aed532249d1bd53bb2c38595af5-0.
INFO 03-01 23:50:39 [logger.py:42] Received request cmpl-4c410fee5a6a4eeb950e63202da95071-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:39 [async_llm.py:261] Added request cmpl-4c410fee5a6a4eeb950e63202da95071-0.
INFO 03-01 23:50:40 [logger.py:42] Received request cmpl-794515cc97554815b069839193cb123e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:40 [async_llm.py:261] Added request cmpl-794515cc97554815b069839193cb123e-0.
INFO 03-01 23:50:41 [logger.py:42] Received request cmpl-69a3027b7b0b47c1a4e4e4b658b2ac40-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:41 [async_llm.py:261] Added request cmpl-69a3027b7b0b47c1a4e4e4b658b2ac40-0.
INFO 03-01 23:50:42 [logger.py:42] Received request cmpl-759fc3eb643a47fc9cf7242e0c10a0f7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:42 [async_llm.py:261] Added request cmpl-759fc3eb643a47fc9cf7242e0c10a0f7-0.
INFO 03-01 23:50:43 [logger.py:42] Received request cmpl-54d6f7ba8e3b4c198d28a71e2e5da3e7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:43 [async_llm.py:261] Added request cmpl-54d6f7ba8e3b4c198d28a71e2e5da3e7-0.
INFO 03-01 23:50:44 [logger.py:42] Received request cmpl-5179e0afd4c648c4adecf4daade8dcef-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:44 [async_llm.py:261] Added request cmpl-5179e0afd4c648c4adecf4daade8dcef-0.
INFO 03-01 23:50:45 [logger.py:42] Received request cmpl-6caa02d233994b78aa5016fe966a48e7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:45 [async_llm.py:261] Added request cmpl-6caa02d233994b78aa5016fe966a48e7-0.
INFO 03-01 23:50:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:50:46 [logger.py:42] Received request cmpl-11cb650b1d3843f09c52174b2b35fdf1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:46 [async_llm.py:261] Added request cmpl-11cb650b1d3843f09c52174b2b35fdf1-0.
INFO 03-01 23:50:47 [logger.py:42] Received request cmpl-6c2a429b27784f3fa449a4627a6b3643-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:47 [async_llm.py:261] Added request cmpl-6c2a429b27784f3fa449a4627a6b3643-0.
INFO 03-01 23:50:48 [logger.py:42] Received request cmpl-f6b1a79efccc457699e48a1d9fa3e426-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:48 [async_llm.py:261] Added request cmpl-f6b1a79efccc457699e48a1d9fa3e426-0.
INFO 03-01 23:50:49 [logger.py:42] Received request cmpl-4c98ec1fc6024fc6b570224fb23231fd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:49 [async_llm.py:261] Added request cmpl-4c98ec1fc6024fc6b570224fb23231fd-0.
INFO 03-01 23:50:50 [logger.py:42] Received request cmpl-7bd34229447944c6a64a2e1fb245b101-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:50 [async_llm.py:261] Added request cmpl-7bd34229447944c6a64a2e1fb245b101-0.
INFO 03-01 23:50:52 [logger.py:42] Received request cmpl-5e26eb1d521d476cbffa1d36ebf56f12-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:52 [async_llm.py:261] Added request cmpl-5e26eb1d521d476cbffa1d36ebf56f12-0.
INFO 03-01 23:50:53 [logger.py:42] Received request cmpl-5dbf022eeeb34cba81618f0ca54dee8b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:53 [async_llm.py:261] Added request cmpl-5dbf022eeeb34cba81618f0ca54dee8b-0.
INFO 03-01 23:50:54 [logger.py:42] Received request cmpl-a264b13a40484eaea5ccb0097ac3c9d2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:54 [async_llm.py:261] Added request cmpl-a264b13a40484eaea5ccb0097ac3c9d2-0.
INFO 03-01 23:50:55 [logger.py:42] Received request cmpl-d203059d4c5e46a58c286637f8e7ecb9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:55 [async_llm.py:261] Added request cmpl-d203059d4c5e46a58c286637f8e7ecb9-0.
INFO 03-01 23:50:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:50:56 [logger.py:42] Received request cmpl-1febe69067cb477db42543eb5a377630-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:56 [async_llm.py:261] Added request cmpl-1febe69067cb477db42543eb5a377630-0.
INFO 03-01 23:50:57 [logger.py:42] Received request cmpl-fe424d597aa449708b32b33473433966-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:57 [async_llm.py:261] Added request cmpl-fe424d597aa449708b32b33473433966-0.
INFO 03-01 23:50:58 [logger.py:42] Received request cmpl-1165fa014222479c8a9efb7d288983ab-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:58 [async_llm.py:261] Added request cmpl-1165fa014222479c8a9efb7d288983ab-0.
INFO 03-01 23:50:59 [logger.py:42] Received request cmpl-8bd9e683f1ee4cfcb355433a28115038-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:59 [async_llm.py:261] Added request cmpl-8bd9e683f1ee4cfcb355433a28115038-0.
INFO 03-01 23:51:00 [logger.py:42] Received request cmpl-ad382273f7634e88a599e47c05d7545a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:51:00 [async_llm.py:261] Added request cmpl-ad382273f7634e88a599e47c05d7545a-0.
INFO 03-01 23:51:01 [logger.py:42] Received request cmpl-dc84b3ba81ed4c81aae4430e153126ff-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:51:01 [async_llm.py:261] Added request cmpl-dc84b3ba81ed4c81aae4430e153126ff-0.
INFO 03-01 23:51:02 [logger.py:42] Received request cmpl-9fb772f4ac354aff9b5b6e291c7b1a8e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:51:02 [async_llm.py:261] Added request cmpl-9fb772f4ac354aff9b5b6e291c7b1a8e-0.
INFO 03-01 23:51:03 [logger.py:42] Received request cmpl-1134e550a3f24e15a2be89513570435a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:51:03 [async_llm.py:261] Added request cmpl-1134e550a3f24e15a2be89513570435a-0.
INFO 03-01 23:51:05 [logger.py:42] Received request cmpl-49a4fba552eb40938db4ad62539f081a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:51:05 [async_llm.py:261] Added request cmpl-49a4fba552eb40938db4ad62539f081a-0.
INFO 03-01 23:51:06 [logger.py:42] Received request cmpl-7846ae859762419abfb2ace9cdb7028e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:51:06 [async_llm.py:261] Added request cmpl-7846ae859762419abfb2ace9cdb7028e-0.
INFO 03-01 23:51:06 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.8 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 1.3%, Prefix cache hit rate: 0.0%
INFO 03-01 23:51:07 [logger.py:42] Received request cmpl-2493b4cb26b04b569a5cdc6c0ac455e2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:51:07 [async_llm.py:261] Added request cmpl-2493b4cb26b04b569a5cdc6c0ac455e2-0.
INFO 03-01 23:51:08 [logger.py:42] Received request cmpl-d4b702b266ed4fa6b67ab462dcd98806-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:51:08 [async_llm.py:261] Added request cmpl-d4b702b266ed4fa6b67ab462dcd98806-0.
INFO 03-01 23:51:09 [logger.py:42] Received request cmpl-ad66a4aca234491994f58acd14e58bf3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:51:09 [async_llm.py:261] Added request cmpl-ad66a4aca234491994f58acd14e58bf3-0.
INFO 03-01 23:51:10 [logger.py:42] Received request cmpl-f2ffc32cc4eb49349345915913fb8d2d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:51:10 [async_llm.py:261] Added request cmpl-f2ffc32cc4eb49349345915913fb8d2d-0.
INFO 03-01 23:51:11 [logger.py:42] Received request cmpl-6d2c764dfda240a38e2d9ca55f5178f1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:51:11 [async_llm.py:261] Added request cmpl-6d2c764dfda240a38e2d9ca55f5178f1-0.
INFO 03-01 23:51:12 [logger.py:42] Received request cmpl-2eff437bbb724bf0b3adc82ef7bc86c9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:51:12 [async_llm.py:261] Added request cmpl-2eff437bbb724bf0b3adc82ef7bc86c9-0.
INFO 03-01 23:51:13 [logger.py:42] Received request cmpl-bb1be97c40cf4cd7a731196c981c388e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:51:13 [async_llm.py:261] Added request cmpl-bb1be97c40cf4cd7a731196c981c388e-0.
INFO 03-01 23:51:14 [logger.py:42] Received request cmpl-d3d38592343d49e8b2069f21d1df6774-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:51:14 [async_llm.py:261] Added request cmpl-d3d38592343d49e8b2069f21d1df6774-0.
INFO 03-01 23:51:15 [logger.py:42] Received request cmpl-5bbd5e5f50574ac097def78207a554c2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:51:15 [async_llm.py:261] Added request cmpl-5bbd5e5f50574ac097def78207a554c2-0.
INFO 03-01 23:51:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.7 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:51:16 [logger.py:42] Received request cmpl-cea9d9c6a9e9419b9f8474eb70f50aab-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:51:16 [async_llm.py:261] Added request cmpl-cea9d9c6a9e9419b9f8474eb70f50aab-0.
INFO 03-01 23:51:18 [logger.py:42] Received request cmpl-184816121a4a4d7fbb901ba82f1ae0e3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:51:18 [async_llm.py:261] Added request cmpl-184816121a4a4d7fbb901ba82f1ae0e3-0.
INFO 03-01 23:51:19 [logger.py:42] Received request cmpl-a733945a152a4bd881ab0c4a5f613300-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:51:19 [async_llm.py:261] Added request cmpl-a733945a152a4bd881ab0c4a5f613300-0.
INFO 03-01 23:51:20 [logger.py:42] Received request cmpl-bd17b7467a834a368b1c17a360082d42-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:51:20 [async_llm.py:261] Added request cmpl-bd17b7467a834a368b1c17a360082d42-0.
INFO 03-01 23:51:21 [logger.py:42] Received request cmpl-7d7576257a784378973e505e593c7731-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:51:21 [async_llm.py:261] Added request cmpl-7d7576257a784378973e505e593c7731-0.
INFO 03-01 23:51:22 [logger.py:42] Received request cmpl-1fa86c21b7e44932adf73adf4ea07895-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:51:22 [async_llm.py:261] Added request cmpl-1fa86c21b7e44932adf73adf4ea07895-0.
INFO 03-01 23:51:23 [logger.py:42] Received request cmpl-d80079e7bbf04960b0610ecaa5e2164d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:51:23 [async_llm.py:261] Added request cmpl-d80079e7bbf04960b0610ecaa5e2164d-0.
INFO 03-01 23:51:24 [logger.py:42] Received request cmpl-0cae81ff733041bab60f8ee6a3a87bba-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:51:24 [async_llm.py:261] Added request cmpl-0cae81ff733041bab60f8ee6a3a87bba-0.
INFO 03-01 23:51:25 [logger.py:42] Received request cmpl-d1251dea02d743a39c13cdb6b6cbf790-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:51:25 [async_llm.py:261] Added request cmpl-d1251dea02d743a39c13cdb6b6cbf790-0.
INFO 03-01 23:51:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:51:26 [logger.py:42] Received request cmpl-ae6c1dfa54b54aa2857d81234874472e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:51:26 [async_llm.py:261] Added request cmpl-ae6c1dfa54b54aa2857d81234874472e-0.
INFO 03-01 23:51:27 [logger.py:42] Received request cmpl-5c54a156861e4db897de247df7e233d4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:51:27 [async_llm.py:261] Added request cmpl-5c54a156861e4db897de247df7e233d4-0.
INFO 03-01 23:51:28 [logger.py:42] Received request cmpl-cf7011290a86457493211a4ed3a43457-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:51:28 [async_llm.py:261] Added request cmpl-cf7011290a86457493211a4ed3a43457-0.
INFO 03-01 23:51:29 [logger.py:42] Received request cmpl-a51173ca179c4be5bb96bd957b7842ec-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:51:29 [async_llm.py:261] Added request cmpl-a51173ca179c4be5bb96bd957b7842ec-0.
INFO 03-01 23:51:31 [logger.py:42] Received request cmpl-51a105e97f424710b3c7bbfa31d5965b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:51:31 [async_llm.py:261] Added request cmpl-51a105e97f424710b3c7bbfa31d5965b-0.
INFO 03-01 23:51:32 [logger.py:42] Received request cmpl-1b227948324f4848ae17bd391e1c7c36-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:51:32 [async_llm.py:261] Added request cmpl-1b227948324f4848ae17bd391e1c7c36-0.
INFO 03-01 23:51:33 [logger.py:42] Received request cmpl-e47a671ed6944f3cb9c0de2674e48c82-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:51:33 [async_llm.py:261] Added request cmpl-e47a671ed6944f3cb9c0de2674e48c82-0.
INFO 03-01 23:51:34 [logger.py:42] Received request cmpl-7586c252d5604e32a52b886163518105-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:51:34 [async_llm.py:261] Added request cmpl-7586c252d5604e32a52b886163518105-0.
INFO 03-01 23:51:35 [logger.py:42] Received request cmpl-99c0b2e324d84d4ea6fbc0ca06844c1b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:51:35 [async_llm.py:261] Added request cmpl-99c0b2e324d84d4ea6fbc0ca06844c1b-0.
INFO 03-01 23:51:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:51:36 [logger.py:42] Received request cmpl-4c7f7a9384d74e2198ca4a7aa9b90016-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:51:36 [async_llm.py:261] Added request cmpl-4c7f7a9384d74e2198ca4a7aa9b90016-0.
INFO 03-01 23:51:37 [logger.py:42] Received request cmpl-9475a0ebdb824d9da92906122a4c2799-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:51:37 [async_llm.py:261] Added request cmpl-9475a0ebdb824d9da92906122a4c2799-0.
INFO 03-01 23:51:38 [logger.py:42] Received request cmpl-4556f339f56a44598812206d91e15e35-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:51:38 [async_llm.py:261] Added request cmpl-4556f339f56a44598812206d91e15e35-0.
INFO 03-01 23:51:39 [logger.py:42] Received request cmpl-674dbdb746b34c9c867d209003a9641b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:51:39 [async_llm.py:261] Added request cmpl-674dbdb746b34c9c867d209003a9641b-0.
INFO 03-01 23:51:40 [logger.py:42] Received request cmpl-41144981c3614b73b70f6132654e6a07-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:51:40 [async_llm.py:261] Added request cmpl-41144981c3614b73b70f6132654e6a07-0.
INFO 03-01 23:51:41 [logger.py:42] Received request cmpl-df0f416e2483430283b58f6b5f729ed8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:51:41 [async_llm.py:261] Added request cmpl-df0f416e2483430283b58f6b5f729ed8-0.
INFO 03-01 23:51:42 [logger.py:42] Received request cmpl-93ba4762629f436bb6f440c9753d7e7c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:51:42 [async_llm.py:261] Added request cmpl-93ba4762629f436bb6f440c9753d7e7c-0.
INFO 03-01 23:51:44 [logger.py:42] Received request cmpl-a93fa10f154b4ef8a0428493d764d31a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:51:44 [async_llm.py:261] Added request cmpl-a93fa10f154b4ef8a0428493d764d31a-0.
INFO 03-01 23:51:45 [logger.py:42] Received request cmpl-015fad62b1c74a37aff59155feea6fc0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:51:45 [async_llm.py:261] Added request cmpl-015fad62b1c74a37aff59155feea6fc0-0.
INFO 03-01 23:51:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:51:46 [logger.py:42] Received request cmpl-db88c5a03b4649d2878798929d35d9cb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:51:46 [async_llm.py:261] Added request cmpl-db88c5a03b4649d2878798929d35d9cb-0.
INFO 03-01 23:51:47 [logger.py:42] Received request cmpl-39711cf459764f838008976c66c1b416-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:51:47 [async_llm.py:261] Added request cmpl-39711cf459764f838008976c66c1b416-0.
INFO 03-01 23:51:48 [logger.py:42] Received request cmpl-c3cd76e2e5dc4775866433dedd60a9da-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:51:48 [async_llm.py:261] Added request cmpl-c3cd76e2e5dc4775866433dedd60a9da-0.
INFO 03-01 23:51:49 [logger.py:42] Received request cmpl-79becd75896049febc9a24923a06b8f5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:51:49 [async_llm.py:261] Added request cmpl-79becd75896049febc9a24923a06b8f5-0.
INFO 03-01 23:51:50 [logger.py:42] Received request cmpl-240ced15ea8c4eb797315d613961a0d2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:51:50 [async_llm.py:261] Added request cmpl-240ced15ea8c4eb797315d613961a0d2-0.
INFO 03-01 23:51:51 [logger.py:42] Received request cmpl-7c21220d86ec44ed9f5fc8db4bc21102-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:51:51 [async_llm.py:261] Added request cmpl-7c21220d86ec44ed9f5fc8db4bc21102-0.
INFO 03-01 23:51:52 [logger.py:42] Received request cmpl-61e99e80bee94dfeb89f0be15e5eafb1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:51:52 [async_llm.py:261] Added request cmpl-61e99e80bee94dfeb89f0be15e5eafb1-0.
INFO 03-01 23:51:53 [logger.py:42] Received request cmpl-655ec60679af4846bade98d6b0bbc246-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:51:53 [async_llm.py:261] Added request cmpl-655ec60679af4846bade98d6b0bbc246-0.
INFO 03-01 23:51:54 [logger.py:42] Received request cmpl-a046b7ae305840aa942db443446c3682-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:51:54 [async_llm.py:261] Added request cmpl-a046b7ae305840aa942db443446c3682-0.
INFO 03-01 23:51:55 [logger.py:42] Received request cmpl-ede0dae0e9474fbaab5499bc60c4dd0d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:51:55 [async_llm.py:261] Added request cmpl-ede0dae0e9474fbaab5499bc60c4dd0d-0.
INFO 03-01 23:51:56 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:51:57 [logger.py:42] Received request cmpl-3cbb8f84f932499cab25bd3c8a75a3b5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:51:57 [async_llm.py:261] Added request cmpl-3cbb8f84f932499cab25bd3c8a75a3b5-0.
INFO 03-01 23:51:58 [logger.py:42] Received request cmpl-bcb9a49268aa41559dba59ee98dc2e63-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:51:58 [async_llm.py:261] Added request cmpl-bcb9a49268aa41559dba59ee98dc2e63-0.
INFO 03-01 23:51:59 [logger.py:42] Received request cmpl-5069fd9a4ab64bc7b25d4981a338efe4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:51:59 [async_llm.py:261] Added request cmpl-5069fd9a4ab64bc7b25d4981a338efe4-0.
INFO 03-01 23:52:00 [logger.py:42] Received request cmpl-d80cf10cd9a74bf9b7333f670096001b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:00 [async_llm.py:261] Added request cmpl-d80cf10cd9a74bf9b7333f670096001b-0.
INFO 03-01 23:52:01 [logger.py:42] Received request cmpl-fd149f0126d84428809c889aa2ce859a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:01 [async_llm.py:261] Added request cmpl-fd149f0126d84428809c889aa2ce859a-0.
INFO 03-01 23:52:02 [logger.py:42] Received request cmpl-33f649cb547947ada4805ae38d68d771-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:02 [async_llm.py:261] Added request cmpl-33f649cb547947ada4805ae38d68d771-0.
INFO 03-01 23:52:03 [logger.py:42] Received request cmpl-e49d5fd31e1841df9dfad88667a75fda-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:03 [async_llm.py:261] Added request cmpl-e49d5fd31e1841df9dfad88667a75fda-0.
INFO 03-01 23:52:04 [logger.py:42] Received request cmpl-b7589888e703443084552995885a4702-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:04 [async_llm.py:261] Added request cmpl-b7589888e703443084552995885a4702-0.
INFO 03-01 23:52:05 [logger.py:42] Received request cmpl-06fc388c55be4ed1af7bfff07a7b38fd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:05 [async_llm.py:261] Added request cmpl-06fc388c55be4ed1af7bfff07a7b38fd-0.
INFO 03-01 23:52:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:52:06 [logger.py:42] Received request cmpl-6a86cdd6717641c2a1990e5f2974787f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:06 [async_llm.py:261] Added request cmpl-6a86cdd6717641c2a1990e5f2974787f-0.
INFO 03-01 23:52:07 [logger.py:42] Received request cmpl-ecb35170a1464d10b84fc6acbae899df-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:07 [async_llm.py:261] Added request cmpl-ecb35170a1464d10b84fc6acbae899df-0.
INFO 03-01 23:52:08 [logger.py:42] Received request cmpl-65be31b580634834a2e3031c0ed64a59-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:08 [async_llm.py:261] Added request cmpl-65be31b580634834a2e3031c0ed64a59-0.
INFO 03-01 23:52:10 [logger.py:42] Received request cmpl-be98ed39bae447c59ef0139e0726140e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:10 [async_llm.py:261] Added request cmpl-be98ed39bae447c59ef0139e0726140e-0.
INFO 03-01 23:52:11 [logger.py:42] Received request cmpl-e3a5acf6ba0342a8bdbf8b95b41b3bde-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:11 [async_llm.py:261] Added request cmpl-e3a5acf6ba0342a8bdbf8b95b41b3bde-0.
INFO 03-01 23:52:12 [logger.py:42] Received request cmpl-3cf6305072cb44f9a872505106ba5fda-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:12 [async_llm.py:261] Added request cmpl-3cf6305072cb44f9a872505106ba5fda-0.
INFO 03-01 23:52:13 [logger.py:42] Received request cmpl-ae242dd51d164954a67870d0000c6e29-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:13 [async_llm.py:261] Added request cmpl-ae242dd51d164954a67870d0000c6e29-0.
INFO 03-01 23:52:14 [logger.py:42] Received request cmpl-b23b6911ea874ada885ce8e568a1e562-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:14 [async_llm.py:261] Added request cmpl-b23b6911ea874ada885ce8e568a1e562-0.
INFO 03-01 23:52:15 [logger.py:42] Received request cmpl-39a08488a3e74f0d9beddf6cc05c28d5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:15 [async_llm.py:261] Added request cmpl-39a08488a3e74f0d9beddf6cc05c28d5-0.
INFO 03-01 23:52:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:52:16 [logger.py:42] Received request cmpl-a37f64e5ab8744448eb2d1ebab15308c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:16 [async_llm.py:261] Added request cmpl-a37f64e5ab8744448eb2d1ebab15308c-0.
INFO 03-01 23:52:17 [logger.py:42] Received request cmpl-353d7fb6a26d45ad99bf44a04011a775-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:17 [async_llm.py:261] Added request cmpl-353d7fb6a26d45ad99bf44a04011a775-0.
INFO 03-01 23:52:18 [logger.py:42] Received request cmpl-766f1eb89d194dfaaf5a5ef04e237379-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:18 [async_llm.py:261] Added request cmpl-766f1eb89d194dfaaf5a5ef04e237379-0.
INFO 03-01 23:52:19 [logger.py:42] Received request cmpl-02388a6aaa734fcab6fe9bbdfbc75bff-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:19 [async_llm.py:261] Added request cmpl-02388a6aaa734fcab6fe9bbdfbc75bff-0.
INFO 03-01 23:52:20 [logger.py:42] Received request cmpl-e4565a1b986b425d802149a83c718f75-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:20 [async_llm.py:261] Added request cmpl-e4565a1b986b425d802149a83c718f75-0.
INFO 03-01 23:52:21 [logger.py:42] Received request cmpl-b454a719e91f4b59b0d3aea3beda9797-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:21 [async_llm.py:261] Added request cmpl-b454a719e91f4b59b0d3aea3beda9797-0.
INFO 03-01 23:52:23 [logger.py:42] Received request cmpl-475790f10b5a4c659be4eb8759b5e212-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:23 [async_llm.py:261] Added request cmpl-475790f10b5a4c659be4eb8759b5e212-0.
INFO 03-01 23:52:24 [logger.py:42] Received request cmpl-f52d336a394a4d2c83af952e398f3cfa-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:24 [async_llm.py:261] Added request cmpl-f52d336a394a4d2c83af952e398f3cfa-0.
INFO 03-01 23:52:25 [logger.py:42] Received request cmpl-ddbe6a26c5714458a1b30b3df9aa32eb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:25 [async_llm.py:261] Added request cmpl-ddbe6a26c5714458a1b30b3df9aa32eb-0.
INFO 03-01 23:52:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:52:26 [logger.py:42] Received request cmpl-c17aa584423f4afabe3fa39dc54949d8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:26 [async_llm.py:261] Added request cmpl-c17aa584423f4afabe3fa39dc54949d8-0.
INFO 03-01 23:52:27 [logger.py:42] Received request cmpl-8ca3bbd2e8a648d0a04133c566f7720f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:27 [async_llm.py:261] Added request cmpl-8ca3bbd2e8a648d0a04133c566f7720f-0.
INFO 03-01 23:52:28 [logger.py:42] Received request cmpl-134e2ff146cb45e7bcbdc3ce57f76029-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:28 [async_llm.py:261] Added request cmpl-134e2ff146cb45e7bcbdc3ce57f76029-0.
INFO 03-01 23:52:29 [logger.py:42] Received request cmpl-283816e50b2e4109a169f3581ca9d197-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:29 [async_llm.py:261] Added request cmpl-283816e50b2e4109a169f3581ca9d197-0.
INFO 03-01 23:52:30 [logger.py:42] Received request cmpl-0ab3ebb1c2e34b7f82cb182eb084135e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:30 [async_llm.py:261] Added request cmpl-0ab3ebb1c2e34b7f82cb182eb084135e-0.
INFO 03-01 23:52:31 [logger.py:42] Received request cmpl-37a4fa3709474150bcd014af1dfda74e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:31 [async_llm.py:261] Added request cmpl-37a4fa3709474150bcd014af1dfda74e-0.
INFO 03-01 23:52:32 [logger.py:42] Received request cmpl-2df2786db4344915805117461c5b0967-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:32 [async_llm.py:261] Added request cmpl-2df2786db4344915805117461c5b0967-0.
INFO 03-01 23:52:33 [logger.py:42] Received request cmpl-07c393e6202f4117b0bf65fcbad63632-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:33 [async_llm.py:261] Added request cmpl-07c393e6202f4117b0bf65fcbad63632-0.
INFO 03-01 23:52:34 [logger.py:42] Received request cmpl-23cd019059b04069990b370388b83b1c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:34 [async_llm.py:261] Added request cmpl-23cd019059b04069990b370388b83b1c-0.
INFO 03-01 23:52:36 [logger.py:42] Received request cmpl-7285c5d264a949aeb1ba465deec04cbc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:36 [async_llm.py:261] Added request cmpl-7285c5d264a949aeb1ba465deec04cbc-0.
INFO 03-01 23:52:36 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:52:37 [logger.py:42] Received request cmpl-8441c7d3d74b45a2a398d3a37d1a223d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:37 [async_llm.py:261] Added request cmpl-8441c7d3d74b45a2a398d3a37d1a223d-0.
INFO 03-01 23:52:38 [logger.py:42] Received request cmpl-1e6ed8b45fdf417b996082ebf365cfb3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:38 [async_llm.py:261] Added request cmpl-1e6ed8b45fdf417b996082ebf365cfb3-0.
INFO 03-01 23:52:39 [logger.py:42] Received request cmpl-b04bb0e276b248a48b88aef12136be1e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:39 [async_llm.py:261] Added request cmpl-b04bb0e276b248a48b88aef12136be1e-0.
INFO 03-01 23:52:40 [logger.py:42] Received request cmpl-71f06df95288435aac68ba9c34b693d5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:40 [async_llm.py:261] Added request cmpl-71f06df95288435aac68ba9c34b693d5-0.
INFO 03-01 23:52:41 [logger.py:42] Received request cmpl-6ccf648e853c47bc905f625a9dc812e1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:41 [async_llm.py:261] Added request cmpl-6ccf648e853c47bc905f625a9dc812e1-0.
INFO 03-01 23:52:42 [logger.py:42] Received request cmpl-6d3901cd6baf4783aa05270775dd5217-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:42 [async_llm.py:261] Added request cmpl-6d3901cd6baf4783aa05270775dd5217-0.
INFO 03-01 23:52:43 [logger.py:42] Received request cmpl-bfeac5dba99940ff96eaefbf9e94a8f2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:43 [async_llm.py:261] Added request cmpl-bfeac5dba99940ff96eaefbf9e94a8f2-0.
INFO 03-01 23:52:44 [logger.py:42] Received request cmpl-5affea2263d846ed97cd36487edc571c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:44 [async_llm.py:261] Added request cmpl-5affea2263d846ed97cd36487edc571c-0.
INFO 03-01 23:52:45 [logger.py:42] Received request cmpl-cd2094755e344f77bce26ea4547a642d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:45 [async_llm.py:261] Added request cmpl-cd2094755e344f77bce26ea4547a642d-0.
INFO 03-01 23:52:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:52:46 [logger.py:42] Received request cmpl-46e010ab24dc4120a8ffb04239bbe0f8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:46 [async_llm.py:261] Added request cmpl-46e010ab24dc4120a8ffb04239bbe0f8-0.
INFO 03-01 23:52:47 [logger.py:42] Received request cmpl-2498268ce9774fd5b3bcfa5fb51dea58-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:47 [async_llm.py:261] Added request cmpl-2498268ce9774fd5b3bcfa5fb51dea58-0.
INFO 03-01 23:52:49 [logger.py:42] Received request cmpl-92c7a0f3cc214eb89c70a28567e02eb2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:49 [async_llm.py:261] Added request cmpl-92c7a0f3cc214eb89c70a28567e02eb2-0.
INFO 03-01 23:52:50 [logger.py:42] Received request cmpl-da048bfd629c4b0ab2c808da2c0c88d4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:50 [async_llm.py:261] Added request cmpl-da048bfd629c4b0ab2c808da2c0c88d4-0.
INFO 03-01 23:52:51 [logger.py:42] Received request cmpl-7ffbc60187d14e8aacc1dfe13b227adf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:51 [async_llm.py:261] Added request cmpl-7ffbc60187d14e8aacc1dfe13b227adf-0.
INFO 03-01 23:52:52 [logger.py:42] Received request cmpl-048dd88051b6460ab5352f951e67c864-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:52 [async_llm.py:261] Added request cmpl-048dd88051b6460ab5352f951e67c864-0.
INFO 03-01 23:52:53 [logger.py:42] Received request cmpl-7ebf66bff1c740e2bbb6cf023b1f9d46-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:53 [async_llm.py:261] Added request cmpl-7ebf66bff1c740e2bbb6cf023b1f9d46-0.
INFO 03-01 23:52:54 [logger.py:42] Received request cmpl-62d4cc4f34cc4b0890e0ead5c887f6c6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:54 [async_llm.py:261] Added request cmpl-62d4cc4f34cc4b0890e0ead5c887f6c6-0.
INFO 03-01 23:52:55 [logger.py:42] Received request cmpl-af07a6e8a7c24f918f4374ea4d00c3dc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:55 [async_llm.py:261] Added request cmpl-af07a6e8a7c24f918f4374ea4d00c3dc-0.
INFO 03-01 23:52:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:52:56 [logger.py:42] Received request cmpl-113f607e54644ac99292dd6cae1539d5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:56 [async_llm.py:261] Added request cmpl-113f607e54644ac99292dd6cae1539d5-0.
INFO 03-01 23:52:57 [logger.py:42] Received request cmpl-c601bd2d7a4647e6867496d13a249e03-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:57 [async_llm.py:261] Added request cmpl-c601bd2d7a4647e6867496d13a249e03-0.
INFO 03-01 23:52:58 [logger.py:42] Received request cmpl-9ce6df6e794a410ba116e4d96ba153d8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:58 [async_llm.py:261] Added request cmpl-9ce6df6e794a410ba116e4d96ba153d8-0.
INFO 03-01 23:52:59 [logger.py:42] Received request cmpl-ff1f53bd781a4d2eb300d597f3379392-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:59 [async_llm.py:261] Added request cmpl-ff1f53bd781a4d2eb300d597f3379392-0.
INFO 03-01 23:53:00 [logger.py:42] Received request cmpl-6fd0ad5cdb7343fd8921ac87bb4f3f94-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:00 [async_llm.py:261] Added request cmpl-6fd0ad5cdb7343fd8921ac87bb4f3f94-0.
INFO 03-01 23:53:02 [logger.py:42] Received request cmpl-9359bbcec27d40a68f0837896bdc90fe-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:02 [async_llm.py:261] Added request cmpl-9359bbcec27d40a68f0837896bdc90fe-0.
INFO 03-01 23:53:03 [logger.py:42] Received request cmpl-78d2e24b298b41ccbb222cb76c6a4cd8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:03 [async_llm.py:261] Added request cmpl-78d2e24b298b41ccbb222cb76c6a4cd8-0.
INFO 03-01 23:53:04 [logger.py:42] Received request cmpl-5524f4a71f0049a59215cab53cf97cf5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:04 [async_llm.py:261] Added request cmpl-5524f4a71f0049a59215cab53cf97cf5-0.
INFO 03-01 23:53:05 [logger.py:42] Received request cmpl-e3d92650408a4aeea729ce03c823a71d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:05 [async_llm.py:261] Added request cmpl-e3d92650408a4aeea729ce03c823a71d-0.
INFO 03-01 23:53:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:53:06 [logger.py:42] Received request cmpl-152dd371ea864423a01809e2ccdb15f7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:06 [async_llm.py:261] Added request cmpl-152dd371ea864423a01809e2ccdb15f7-0.
INFO 03-01 23:53:07 [logger.py:42] Received request cmpl-977b90feb3a84677b5a36acaa237b86a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:07 [async_llm.py:261] Added request cmpl-977b90feb3a84677b5a36acaa237b86a-0.
INFO 03-01 23:53:08 [logger.py:42] Received request cmpl-b9fbe67689334bf3ac844e8ccaff44c1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:08 [async_llm.py:261] Added request cmpl-b9fbe67689334bf3ac844e8ccaff44c1-0.
INFO 03-01 23:53:09 [logger.py:42] Received request cmpl-3b799cddce764c34a8f676c9ddcc92e1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:09 [async_llm.py:261] Added request cmpl-3b799cddce764c34a8f676c9ddcc92e1-0.
INFO 03-01 23:53:10 [logger.py:42] Received request cmpl-0ebcc3e15c6448a78048e2f44dd6a383-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:10 [async_llm.py:261] Added request cmpl-0ebcc3e15c6448a78048e2f44dd6a383-0.
INFO 03-01 23:53:11 [logger.py:42] Received request cmpl-1afc38beba9a498e961516a72bba3b56-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:11 [async_llm.py:261] Added request cmpl-1afc38beba9a498e961516a72bba3b56-0.
INFO 03-01 23:53:12 [logger.py:42] Received request cmpl-5c1a7248e5d140afbfdce703b4e5a1c5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:12 [async_llm.py:261] Added request cmpl-5c1a7248e5d140afbfdce703b4e5a1c5-0.
INFO 03-01 23:53:13 [logger.py:42] Received request cmpl-8629a570329f49438bf60baec01f4c9c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:14 [async_llm.py:261] Added request cmpl-8629a570329f49438bf60baec01f4c9c-0.
INFO 03-01 23:53:15 [logger.py:42] Received request cmpl-4b818c4bcb994a61a94bd3d92f0fade1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:15 [async_llm.py:261] Added request cmpl-4b818c4bcb994a61a94bd3d92f0fade1-0.
INFO 03-01 23:53:16 [logger.py:42] Received request cmpl-abf56e8ac14c455b8b6ff1b499706c61-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:16 [async_llm.py:261] Added request cmpl-abf56e8ac14c455b8b6ff1b499706c61-0.
INFO 03-01 23:53:16 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.7 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 1.3%, Prefix cache hit rate: 0.0%
INFO 03-01 23:53:17 [logger.py:42] Received request cmpl-cff34e2a186f4f76985b26414f881f7f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:17 [async_llm.py:261] Added request cmpl-cff34e2a186f4f76985b26414f881f7f-0.
INFO 03-01 23:53:18 [logger.py:42] Received request cmpl-2894ff85d38142b095bfe3d860596c38-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:18 [async_llm.py:261] Added request cmpl-2894ff85d38142b095bfe3d860596c38-0.
INFO 03-01 23:53:19 [logger.py:42] Received request cmpl-8eb16b591ee546c39451bcf79f4c2115-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:19 [async_llm.py:261] Added request cmpl-8eb16b591ee546c39451bcf79f4c2115-0.
INFO 03-01 23:53:20 [logger.py:42] Received request cmpl-feb7ce0116bb42ee9532665c3ebee853-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:20 [async_llm.py:261] Added request cmpl-feb7ce0116bb42ee9532665c3ebee853-0.
INFO 03-01 23:53:21 [logger.py:42] Received request cmpl-5bce934513344485bf35c61e44a757fa-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:21 [async_llm.py:261] Added request cmpl-5bce934513344485bf35c61e44a757fa-0.
INFO 03-01 23:53:22 [logger.py:42] Received request cmpl-66020a5fe62f428e9de81ddf2db15205-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:22 [async_llm.py:261] Added request cmpl-66020a5fe62f428e9de81ddf2db15205-0.
INFO 03-01 23:53:23 [logger.py:42] Received request cmpl-3712d50c9d804572a44010f1658fa8ed-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:23 [async_llm.py:261] Added request cmpl-3712d50c9d804572a44010f1658fa8ed-0.
INFO 03-01 23:53:24 [logger.py:42] Received request cmpl-1cfb9b67817e49c6a784a26c8276bf79-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:24 [async_llm.py:261] Added request cmpl-1cfb9b67817e49c6a784a26c8276bf79-0.
INFO 03-01 23:53:25 [logger.py:42] Received request cmpl-86d0724d7f444cdc9939e00ba99f40c1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:25 [async_llm.py:261] Added request cmpl-86d0724d7f444cdc9939e00ba99f40c1-0.
INFO 03-01 23:53:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.8 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:53:26 [logger.py:42] Received request cmpl-e1876975951d4da4814fd10691a67247-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:26 [async_llm.py:261] Added request cmpl-e1876975951d4da4814fd10691a67247-0.
INFO 03-01 23:53:28 [logger.py:42] Received request cmpl-bcad9c79049843f599f06c0bcff8deb2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:28 [async_llm.py:261] Added request cmpl-bcad9c79049843f599f06c0bcff8deb2-0.
INFO 03-01 23:53:29 [logger.py:42] Received request cmpl-b35286fd7e2a45b79f948352ed242cc9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:29 [async_llm.py:261] Added request cmpl-b35286fd7e2a45b79f948352ed242cc9-0.
INFO 03-01 23:53:30 [logger.py:42] Received request cmpl-50c2e68c5ad84486be37aa2e795f36af-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:30 [async_llm.py:261] Added request cmpl-50c2e68c5ad84486be37aa2e795f36af-0.
INFO 03-01 23:53:31 [logger.py:42] Received request cmpl-014d38b75c9c4152b2b3e7238790c753-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:31 [async_llm.py:261] Added request cmpl-014d38b75c9c4152b2b3e7238790c753-0.
INFO 03-01 23:53:32 [logger.py:42] Received request cmpl-fbb34461bdf24fc3bfe12404a7ba32f6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:32 [async_llm.py:261] Added request cmpl-fbb34461bdf24fc3bfe12404a7ba32f6-0.
INFO 03-01 23:53:33 [logger.py:42] Received request cmpl-e8e55a4a730f4681a56e428fabe824ac-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:33 [async_llm.py:261] Added request cmpl-e8e55a4a730f4681a56e428fabe824ac-0.
INFO 03-01 23:53:34 [logger.py:42] Received request cmpl-bb39e92185bb4a22a7f9865ab6401e6b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:34 [async_llm.py:261] Added request cmpl-bb39e92185bb4a22a7f9865ab6401e6b-0.
INFO 03-01 23:53:35 [logger.py:42] Received request cmpl-debaf2e566314eac96e00c511c5c86f0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:35 [async_llm.py:261] Added request cmpl-debaf2e566314eac96e00c511c5c86f0-0.
INFO 03-01 23:53:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:53:36 [logger.py:42] Received request cmpl-f48faed59c584238867444e14df5a233-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:36 [async_llm.py:261] Added request cmpl-f48faed59c584238867444e14df5a233-0.
INFO 03-01 23:53:37 [logger.py:42] Received request cmpl-06571b5f75f14c54a506c2346d5bf8b9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:37 [async_llm.py:261] Added request cmpl-06571b5f75f14c54a506c2346d5bf8b9-0.
INFO 03-01 23:53:38 [logger.py:42] Received request cmpl-d09e9ecf332a476d82101a17653e5198-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:38 [async_llm.py:261] Added request cmpl-d09e9ecf332a476d82101a17653e5198-0.
INFO 03-01 23:53:39 [logger.py:42] Received request cmpl-3ca20d553c5e4d0b8392fb0fe86de1fc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:39 [async_llm.py:261] Added request cmpl-3ca20d553c5e4d0b8392fb0fe86de1fc-0.
INFO 03-01 23:53:41 [logger.py:42] Received request cmpl-e90ec3d5a8814e5f99020dcc2134488b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:41 [async_llm.py:261] Added request cmpl-e90ec3d5a8814e5f99020dcc2134488b-0.
INFO 03-01 23:53:42 [logger.py:42] Received request cmpl-76aca0b4dd2c4fccb21a3ab0de156139-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:42 [async_llm.py:261] Added request cmpl-76aca0b4dd2c4fccb21a3ab0de156139-0.
INFO 03-01 23:53:43 [logger.py:42] Received request cmpl-f0d37b283cf340d0b41c6d2110c6631b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:43 [async_llm.py:261] Added request cmpl-f0d37b283cf340d0b41c6d2110c6631b-0.
INFO 03-01 23:53:44 [logger.py:42] Received request cmpl-8e9d8379c2f84730b439a004be17f1fb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:44 [async_llm.py:261] Added request cmpl-8e9d8379c2f84730b439a004be17f1fb-0.
INFO 03-01 23:53:45 [logger.py:42] Received request cmpl-4ee5f579e3ea402a9869aad3988769d9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:45 [async_llm.py:261] Added request cmpl-4ee5f579e3ea402a9869aad3988769d9-0.
INFO 03-01 23:53:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:53:46 [logger.py:42] Received request cmpl-11aa314326b6436eb9fd2eb99932f75a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:46 [async_llm.py:261] Added request cmpl-11aa314326b6436eb9fd2eb99932f75a-0.
INFO 03-01 23:53:47 [logger.py:42] Received request cmpl-1f7f2b27e6a04ee4915e26bb7c8b4ccd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:47 [async_llm.py:261] Added request cmpl-1f7f2b27e6a04ee4915e26bb7c8b4ccd-0.
INFO 03-01 23:53:48 [logger.py:42] Received request cmpl-71f7897165024bbb8b48d655b885595e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:48 [async_llm.py:261] Added request cmpl-71f7897165024bbb8b48d655b885595e-0.
INFO 03-01 23:53:49 [logger.py:42] Received request cmpl-1cc1559868f147a4857c2cfb2afd5654-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:49 [async_llm.py:261] Added request cmpl-1cc1559868f147a4857c2cfb2afd5654-0.
INFO 03-01 23:53:50 [logger.py:42] Received request cmpl-652e1ea887974db18685a705e7135d89-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:50 [async_llm.py:261] Added request cmpl-652e1ea887974db18685a705e7135d89-0.
INFO 03-01 23:53:51 [logger.py:42] Received request cmpl-2b8a75e1452b4e7f9d89b094d278fddf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:51 [async_llm.py:261] Added request cmpl-2b8a75e1452b4e7f9d89b094d278fddf-0.
INFO 03-01 23:53:52 [logger.py:42] Received request cmpl-6867a05bab534bdbb17dd333fe05ad18-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:52 [async_llm.py:261] Added request cmpl-6867a05bab534bdbb17dd333fe05ad18-0.
INFO 03-01 23:53:54 [logger.py:42] Received request cmpl-9ae02af5708f4d86aa1f5ac0b955cc17-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:54 [async_llm.py:261] Added request cmpl-9ae02af5708f4d86aa1f5ac0b955cc17-0.
INFO 03-01 23:53:55 [logger.py:42] Received request cmpl-bbe8f9989b2a4ebe86fe9be16b6a33c9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:55 [async_llm.py:261] Added request cmpl-bbe8f9989b2a4ebe86fe9be16b6a33c9-0.
INFO 03-01 23:53:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:53:56 [logger.py:42] Received request cmpl-e2519f6f8a1543d0992127f93b428219-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:56 [async_llm.py:261] Added request cmpl-e2519f6f8a1543d0992127f93b428219-0.
INFO 03-01 23:53:57 [logger.py:42] Received request cmpl-b272b61c5259443aad46a59ba1b0fefa-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:57 [async_llm.py:261] Added request cmpl-b272b61c5259443aad46a59ba1b0fefa-0.
INFO 03-01 23:53:58 [logger.py:42] Received request cmpl-d8094e60322946d18954d51a4fe89403-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:58 [async_llm.py:261] Added request cmpl-d8094e60322946d18954d51a4fe89403-0.
INFO 03-01 23:53:59 [logger.py:42] Received request cmpl-085d452591914242a575f06a013c10e1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:59 [async_llm.py:261] Added request cmpl-085d452591914242a575f06a013c10e1-0.
INFO 03-01 23:54:00 [logger.py:42] Received request cmpl-359b30d6eeca43a086aa21f891cf9953-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:54:00 [async_llm.py:261] Added request cmpl-359b30d6eeca43a086aa21f891cf9953-0.
INFO 03-01 23:54:01 [logger.py:42] Received request cmpl-2eede99501da4c8a8ba3ec9417bc2f3e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:54:01 [async_llm.py:261] Added request cmpl-2eede99501da4c8a8ba3ec9417bc2f3e-0.
INFO 03-01 23:54:02 [logger.py:42] Received request cmpl-39dfd6c6f5954ab49fcb23c479ebe613-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:54:02 [async_llm.py:261] Added request cmpl-39dfd6c6f5954ab49fcb23c479ebe613-0.
INFO 03-01 23:54:03 [logger.py:42] Received request cmpl-6d9b9d2ccbe241cbacb6b5922877c34f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:54:03 [async_llm.py:261] Added request cmpl-6d9b9d2ccbe241cbacb6b5922877c34f-0.
INFO 03-01 23:54:04 [logger.py:42] Received request cmpl-1819d004cd6842efb7e8ab80a9f6645d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:54:04 [async_llm.py:261] Added request cmpl-1819d004cd6842efb7e8ab80a9f6645d-0.
INFO 03-01 23:54:05 [logger.py:42] Received request cmpl-cfae834d305b466fad9a69b5e926bcab-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:54:05 [async_llm.py:261] Added request cmpl-cfae834d305b466fad9a69b5e926bcab-0.
INFO 03-01 23:54:06 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:54:07 [logger.py:42] Received request cmpl-f8c78573c628442dad65c6d3e391daba-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:54:07 [async_llm.py:261] Added request cmpl-f8c78573c628442dad65c6d3e391daba-0.
INFO 03-01 23:54:08 [logger.py:42] Received request cmpl-6f43ac375d554e42b5192cc9e00014f7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:54:08 [async_llm.py:261] Added request cmpl-6f43ac375d554e42b5192cc9e00014f7-0.
INFO 03-01 23:54:09 [logger.py:42] Received request cmpl-b91212b73d4f46a086c83e41a65c8c2c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:54:09 [async_llm.py:261] Added request cmpl-b91212b73d4f46a086c83e41a65c8c2c-0.
INFO 03-01 23:54:10 [logger.py:42] Received request cmpl-c0f2a586003545e0beb6191c90abb6a4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:54:10 [async_llm.py:261] Added request cmpl-c0f2a586003545e0beb6191c90abb6a4-0.
[... ~40 further request cycles elided (23:54:11 – 23:54:52): the same "Received request" / "POST /v1/completions HTTP/1.1 200 OK" / "Added request" triplet repeats roughly once per second from 1.2.3.5:1235, each with a fresh cmpl-* request ID and identical prompt and SamplingParams (temperature=0.0, max_tokens=5). The periodic engine stats emitted during this window are kept below ...]
INFO 03-01 23:54:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:54:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:54:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:54:46 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:54:53 [logger.py:42] Received request cmpl-f62ab9da98994f7dba4dd6758812a90c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:54:53 [async_llm.py:261] Added request cmpl-f62ab9da98994f7dba4dd6758812a90c-0.
INFO 03-01 23:54:54 [logger.py:42] Received request cmpl-5a3fc84c50314c97a5deead0f77bbc8a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:54:54 [async_llm.py:261] Added request cmpl-5a3fc84c50314c97a5deead0f77bbc8a-0.
INFO 03-01 23:54:55 [logger.py:42] Received request cmpl-4e4331c6dc924272a2648cf7ea24712c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:54:55 [async_llm.py:261] Added request cmpl-4e4331c6dc924272a2648cf7ea24712c-0.
INFO 03-01 23:54:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:54:56 [logger.py:42] Received request cmpl-2d4c0d31f11447db8f7da6a513537adc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:54:56 [async_llm.py:261] Added request cmpl-2d4c0d31f11447db8f7da6a513537adc-0.
INFO 03-01 23:54:58 [logger.py:42] Received request cmpl-300afc901a894551a5c61de2adc55b05-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:54:58 [async_llm.py:261] Added request cmpl-300afc901a894551a5c61de2adc55b05-0.
INFO 03-01 23:54:59 [logger.py:42] Received request cmpl-75752053d4e04130978ebba7ee11c0e2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:54:59 [async_llm.py:261] Added request cmpl-75752053d4e04130978ebba7ee11c0e2-0.
INFO 03-01 23:55:00 [logger.py:42] Received request cmpl-0a689b8cf91941d0b8ae145ff4b3d9a6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:55:00 [async_llm.py:261] Added request cmpl-0a689b8cf91941d0b8ae145ff4b3d9a6-0.
INFO 03-01 23:55:01 [logger.py:42] Received request cmpl-bde6cd433af740debee7a67334b04e88-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:55:01 [async_llm.py:261] Added request cmpl-bde6cd433af740debee7a67334b04e88-0.
INFO 03-01 23:55:02 [logger.py:42] Received request cmpl-e37c6a9c8be2482e98c4f233b25cb272-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:55:02 [async_llm.py:261] Added request cmpl-e37c6a9c8be2482e98c4f233b25cb272-0.
INFO 03-01 23:55:03 [logger.py:42] Received request cmpl-1c204b97150a41c78518158f19df1dbd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:55:03 [async_llm.py:261] Added request cmpl-1c204b97150a41c78518158f19df1dbd-0.
INFO 03-01 23:55:04 [logger.py:42] Received request cmpl-997df7bc563e45f286d6386be7e022d1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:55:04 [async_llm.py:261] Added request cmpl-997df7bc563e45f286d6386be7e022d1-0.
INFO 03-01 23:55:05 [logger.py:42] Received request cmpl-73f0ece5996e47038ad7aa9fb9b4be7f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:55:05 [async_llm.py:261] Added request cmpl-73f0ece5996e47038ad7aa9fb9b4be7f-0.
INFO 03-01 23:55:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:55:06 [logger.py:42] Received request cmpl-6afe09a873344018b4c5d34ead52838b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:55:06 [async_llm.py:261] Added request cmpl-6afe09a873344018b4c5d34ead52838b-0.
INFO 03-01 23:55:07 [logger.py:42] Received request cmpl-2819d331f3624386b630eaadd00abfe9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:55:07 [async_llm.py:261] Added request cmpl-2819d331f3624386b630eaadd00abfe9-0.
INFO 03-01 23:55:08 [logger.py:42] Received request cmpl-ce2db0c3a5f94fbfb58ec7143377e24c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:55:08 [async_llm.py:261] Added request cmpl-ce2db0c3a5f94fbfb58ec7143377e24c-0.
INFO 03-01 23:55:09 [logger.py:42] Received request cmpl-0499ab82ef674d10b35260c4cb50aa00-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:55:09 [async_llm.py:261] Added request cmpl-0499ab82ef674d10b35260c4cb50aa00-0.
INFO 03-01 23:55:11 [logger.py:42] Received request cmpl-e790b71738134ae290e5216b3e5d7548-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:55:11 [async_llm.py:261] Added request cmpl-e790b71738134ae290e5216b3e5d7548-0.
INFO 03-01 23:55:12 [logger.py:42] Received request cmpl-b2de02298aef4a9282f565f0781278e9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:55:12 [async_llm.py:261] Added request cmpl-b2de02298aef4a9282f565f0781278e9-0.
INFO 03-01 23:55:13 [logger.py:42] Received request cmpl-b34d1104a5a8449180e3939633e4382e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:55:13 [async_llm.py:261] Added request cmpl-b34d1104a5a8449180e3939633e4382e-0.
INFO 03-01 23:55:14 [logger.py:42] Received request cmpl-f9fbaeb7e3e546cc8e8f68e03f551e70-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:55:14 [async_llm.py:261] Added request cmpl-f9fbaeb7e3e546cc8e8f68e03f551e70-0.
INFO 03-01 23:55:15 [logger.py:42] Received request cmpl-e66364c3113749dd8ec5fb432ff5e431-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:55:15 [async_llm.py:261] Added request cmpl-e66364c3113749dd8ec5fb432ff5e431-0.
INFO 03-01 23:55:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:55:16 [logger.py:42] Received request cmpl-687c39f51aff4af99df1bce0b467f3b7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:55:16 [async_llm.py:261] Added request cmpl-687c39f51aff4af99df1bce0b467f3b7-0.
INFO 03-01 23:55:17 [logger.py:42] Received request cmpl-4cade9e1db3e4286ba1e6d30d81091b9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:55:17 [async_llm.py:261] Added request cmpl-4cade9e1db3e4286ba1e6d30d81091b9-0.
INFO 03-01 23:55:18 [logger.py:42] Received request cmpl-1214563e11fa46b9811407495027e2db-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:55:18 [async_llm.py:261] Added request cmpl-1214563e11fa46b9811407495027e2db-0.
INFO 03-01 23:55:19 [logger.py:42] Received request cmpl-7eed1cf3d5b044a190ab02fb52216b49-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:55:19 [async_llm.py:261] Added request cmpl-7eed1cf3d5b044a190ab02fb52216b49-0.
INFO 03-01 23:55:20 [logger.py:42] Received request cmpl-0afb3bfed8184fe89e3aed50a74f1674-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:55:20 [async_llm.py:261] Added request cmpl-0afb3bfed8184fe89e3aed50a74f1674-0.
INFO 03-01 23:55:21 [logger.py:42] Received request cmpl-890a160aec2d4b60aeec33ff8d426ae8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:55:21 [async_llm.py:261] Added request cmpl-890a160aec2d4b60aeec33ff8d426ae8-0.
INFO 03-01 23:55:22 [logger.py:42] Received request cmpl-4c5dff25d1e8465499eaa02354d7ea89-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:55:22 [async_llm.py:261] Added request cmpl-4c5dff25d1e8465499eaa02354d7ea89-0.
INFO 03-01 23:55:24 [logger.py:42] Received request cmpl-46da01fd0ece483f9e055c8b10cc910a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:55:24 [async_llm.py:261] Added request cmpl-46da01fd0ece483f9e055c8b10cc910a-0.
INFO 03-01 23:55:25 [logger.py:42] Received request cmpl-6894a0118de2497c806600f81de12d33-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:55:25 [async_llm.py:261] Added request cmpl-6894a0118de2497c806600f81de12d33-0.
INFO 03-01 23:55:26 [logger.py:42] Received request cmpl-943489dcf62d4d75858d0ccaa45d68fb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:55:26 [async_llm.py:261] Added request cmpl-943489dcf62d4d75858d0ccaa45d68fb-0.
INFO 03-01 23:55:26 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.7 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 1.3%, Prefix cache hit rate: 0.0%
INFO 03-01 23:55:27 [logger.py:42] Received request cmpl-67e459d199664bc889085f9844bddce0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:55:27 [async_llm.py:261] Added request cmpl-67e459d199664bc889085f9844bddce0-0.
[... ~40 near-identical entries elided: the same 'write a quick sort algorithm.' completion request (max_tokens=5) arrives roughly once per second from 1.2.3.5:1235, and each POST /v1/completions returns 200 OK ...]
INFO 03-01 23:55:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.8 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[... periodic engine-stats lines repeat every 10 s through 23:56:06, with average generation throughput between 4.5 and 4.8 tokens/s and GPU KV cache usage steady at 0.7% ...]
INFO 03-01 23:56:11 [logger.py:42] Received request cmpl-a42cea7caaeb4f669518010bd19c5e85-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:56:11 [async_llm.py:261] Added request cmpl-a42cea7caaeb4f669518010bd19c5e85-0.
INFO 03-01 23:56:12 [logger.py:42] Received request cmpl-d2bb4bec4f47486ca280a8aeeac71b0d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:56:12 [async_llm.py:261] Added request cmpl-d2bb4bec4f47486ca280a8aeeac71b0d-0.
INFO 03-01 23:56:13 [logger.py:42] Received request cmpl-2a080c19618c4f848eca0b822a839672-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:56:13 [async_llm.py:261] Added request cmpl-2a080c19618c4f848eca0b822a839672-0.
INFO 03-01 23:56:14 [logger.py:42] Received request cmpl-6e508a8b080c4ac39b8cd7407bdb0675-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:56:14 [async_llm.py:261] Added request cmpl-6e508a8b080c4ac39b8cd7407bdb0675-0.
INFO 03-01 23:56:16 [logger.py:42] Received request cmpl-019c6802617b44cc92af8c8d79736ee3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:56:16 [async_llm.py:261] Added request cmpl-019c6802617b44cc92af8c8d79736ee3-0.
INFO 03-01 23:56:16 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:56:17 [logger.py:42] Received request cmpl-90cb96de47694d13b8245e7e92d2d4bb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:56:17 [async_llm.py:261] Added request cmpl-90cb96de47694d13b8245e7e92d2d4bb-0.
INFO 03-01 23:56:18 [logger.py:42] Received request cmpl-be36056c11154ababf8246f6aa0b92cc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:56:18 [async_llm.py:261] Added request cmpl-be36056c11154ababf8246f6aa0b92cc-0.
INFO 03-01 23:56:19 [logger.py:42] Received request cmpl-81c72dad955b4713b33bde11868a75ea-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:56:19 [async_llm.py:261] Added request cmpl-81c72dad955b4713b33bde11868a75ea-0.
INFO 03-01 23:56:20 [logger.py:42] Received request cmpl-c11dbe9fd52b47e59d18c6df376f8e91-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:56:20 [async_llm.py:261] Added request cmpl-c11dbe9fd52b47e59d18c6df376f8e91-0.
INFO 03-01 23:56:21 [logger.py:42] Received request cmpl-3bcbc28d9d1646b5bd250d581d87b4b2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:56:21 [async_llm.py:261] Added request cmpl-3bcbc28d9d1646b5bd250d581d87b4b2-0.
INFO 03-01 23:56:22 [logger.py:42] Received request cmpl-17d20ed6f18445ad985af968ef1883ad-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:56:22 [async_llm.py:261] Added request cmpl-17d20ed6f18445ad985af968ef1883ad-0.
INFO 03-01 23:56:23 [logger.py:42] Received request cmpl-6f432adcd01342709c46ffa1f573699a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:56:23 [async_llm.py:261] Added request cmpl-6f432adcd01342709c46ffa1f573699a-0.
INFO 03-01 23:56:24 [logger.py:42] Received request cmpl-821cc7868a53421e87d3bdb613cfe7f2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:56:24 [async_llm.py:261] Added request cmpl-821cc7868a53421e87d3bdb613cfe7f2-0.
INFO 03-01 23:56:25 [logger.py:42] Received request cmpl-ef8dfe3c97ab4ea09e67e8af82450c75-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:56:25 [async_llm.py:261] Added request cmpl-ef8dfe3c97ab4ea09e67e8af82450c75-0.
INFO 03-01 23:56:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:56:26 [logger.py:42] Received request cmpl-d0f4c2b0bf444fc0820db2e6e3642134-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:56:26 [async_llm.py:261] Added request cmpl-d0f4c2b0bf444fc0820db2e6e3642134-0.
INFO 03-01 23:56:27 [logger.py:42] Received request cmpl-d4d6502aa62c42f08fa892450b218984-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:56:27 [async_llm.py:261] Added request cmpl-d4d6502aa62c42f08fa892450b218984-0.
INFO 03-01 23:56:29 [logger.py:42] Received request cmpl-bad373cd947d4e16b31b4783f43fdbec-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:56:29 [async_llm.py:261] Added request cmpl-bad373cd947d4e16b31b4783f43fdbec-0.
INFO 03-01 23:56:30 [logger.py:42] Received request cmpl-46299004141f40b099896a9b98d569c3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:56:30 [async_llm.py:261] Added request cmpl-46299004141f40b099896a9b98d569c3-0.
INFO 03-01 23:56:31 [logger.py:42] Received request cmpl-bb515f1060004908ac7448c461eeb6f5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:56:31 [async_llm.py:261] Added request cmpl-bb515f1060004908ac7448c461eeb6f5-0.
INFO 03-01 23:56:32 [logger.py:42] Received request cmpl-04778ff563b742448359d94ab2b97245-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:56:32 [async_llm.py:261] Added request cmpl-04778ff563b742448359d94ab2b97245-0.
INFO 03-01 23:56:33 [logger.py:42] Received request cmpl-b3aec9725c0046138d93cd4c38d5a492-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:56:33 [async_llm.py:261] Added request cmpl-b3aec9725c0046138d93cd4c38d5a492-0.
INFO 03-01 23:56:34 [logger.py:42] Received request cmpl-2c4591348fd5425f9861b6cabcc66f29-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:56:34 [async_llm.py:261] Added request cmpl-2c4591348fd5425f9861b6cabcc66f29-0.
INFO 03-01 23:56:35 [logger.py:42] Received request cmpl-35cd9c6411f84e8898ab25505e2071b1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:56:35 [async_llm.py:261] Added request cmpl-35cd9c6411f84e8898ab25505e2071b1-0.
INFO 03-01 23:56:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:56:36 [logger.py:42] Received request cmpl-5ed7d425b5ba4153be00d7f35f1369aa-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:56:36 [async_llm.py:261] Added request cmpl-5ed7d425b5ba4153be00d7f35f1369aa-0.
INFO 03-01 23:56:37 [logger.py:42] Received request cmpl-511061c183be48319992b9218f8b3ab1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:56:37 [async_llm.py:261] Added request cmpl-511061c183be48319992b9218f8b3ab1-0.
INFO 03-01 23:56:38 [logger.py:42] Received request cmpl-af8f487d854840b6b733eba9b0d3bbab-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:56:38 [async_llm.py:261] Added request cmpl-af8f487d854840b6b733eba9b0d3bbab-0.
INFO 03-01 23:56:39 [logger.py:42] Received request cmpl-28389cde1570480bb3a8924638a8c8ea-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:56:39 [async_llm.py:261] Added request cmpl-28389cde1570480bb3a8924638a8c8ea-0.
INFO 03-01 23:56:40 [logger.py:42] Received request cmpl-238f94d66c2d4736a4df11830aaf5c31-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:56:40 [async_llm.py:261] Added request cmpl-238f94d66c2d4736a4df11830aaf5c31-0.
INFO 03-01 23:56:42 [logger.py:42] Received request cmpl-e9046a0e525f4383ba71b94003bbbb40-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:56:42 [async_llm.py:261] Added request cmpl-e9046a0e525f4383ba71b94003bbbb40-0.
INFO 03-01 23:56:43 [logger.py:42] Received request cmpl-1b6c97c63f8343a0b406c64045c7144c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:56:43 [async_llm.py:261] Added request cmpl-1b6c97c63f8343a0b406c64045c7144c-0.
INFO 03-01 23:56:44 [logger.py:42] Received request cmpl-f1aa6f720e5b4d9f8f403948ec80043b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:56:44 [async_llm.py:261] Added request cmpl-f1aa6f720e5b4d9f8f403948ec80043b-0.
INFO 03-01 23:56:45 [logger.py:42] Received request cmpl-4dd794920115460fb3ee52699cbf19ef-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:56:45 [async_llm.py:261] Added request cmpl-4dd794920115460fb3ee52699cbf19ef-0.
INFO 03-01 23:56:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:56:46 [logger.py:42] Received request cmpl-dbdde5f93cfb474a8dae9e6de27e904e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:56:46 [async_llm.py:261] Added request cmpl-dbdde5f93cfb474a8dae9e6de27e904e-0.
INFO 03-01 23:56:47 [logger.py:42] Received request cmpl-a89672ee44ab42afba8b65d3306f38d9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:56:47 [async_llm.py:261] Added request cmpl-a89672ee44ab42afba8b65d3306f38d9-0.
INFO 03-01 23:56:48 [logger.py:42] Received request cmpl-bd8ea7012d3f4369816697a2794806ea-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:56:48 [async_llm.py:261] Added request cmpl-bd8ea7012d3f4369816697a2794806ea-0.
INFO 03-01 23:56:49 [logger.py:42] Received request cmpl-883436598ba6429d968f158dca55c386-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:56:49 [async_llm.py:261] Added request cmpl-883436598ba6429d968f158dca55c386-0.
INFO 03-01 23:56:50 [logger.py:42] Received request cmpl-323440df5ce2412db56e80676bb57f92-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:56:50 [async_llm.py:261] Added request cmpl-323440df5ce2412db56e80676bb57f92-0.
INFO 03-01 23:56:51 [logger.py:42] Received request cmpl-e58ed0b9f2704228afd58ac7f0a27ad8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:56:51 [async_llm.py:261] Added request cmpl-e58ed0b9f2704228afd58ac7f0a27ad8-0.
INFO 03-01 23:56:52 [logger.py:42] Received request cmpl-5deeb536bc2f42c5b679551a9503ad85-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:56:52 [async_llm.py:261] Added request cmpl-5deeb536bc2f42c5b679551a9503ad85-0.
INFO 03-01 23:56:53 [logger.py:42] Received request cmpl-1c9089f5b6ff42ea9711db0b2cf1b127-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:56:53 [async_llm.py:261] Added request cmpl-1c9089f5b6ff42ea9711db0b2cf1b127-0.
INFO 03-01 23:56:55 [logger.py:42] Received request cmpl-aff726d831d24716879156b8c4324d47-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:56:55 [async_llm.py:261] Added request cmpl-aff726d831d24716879156b8c4324d47-0.
INFO 03-01 23:56:56 [logger.py:42] Received request cmpl-4dd02aeaea3d4009b0d6b41926246644-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:56:56 [async_llm.py:261] Added request cmpl-4dd02aeaea3d4009b0d6b41926246644-0.
INFO 03-01 23:56:56 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:56:57 [logger.py:42] Received request cmpl-385d4ed0c7804e9fa65ed82792af7ef0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:56:57 [async_llm.py:261] Added request cmpl-385d4ed0c7804e9fa65ed82792af7ef0-0.
INFO 03-01 23:56:58 [logger.py:42] Received request cmpl-16e0add7100c4aa5909089886a3c14b2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:56:58 [async_llm.py:261] Added request cmpl-16e0add7100c4aa5909089886a3c14b2-0.
INFO 03-01 23:56:59 [logger.py:42] Received request cmpl-00e1e457ab7e495e8452963050454386-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:56:59 [async_llm.py:261] Added request cmpl-00e1e457ab7e495e8452963050454386-0.
INFO 03-01 23:57:00 [logger.py:42] Received request cmpl-70531f3f6e1a4a468f36816687f807e5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:57:00 [async_llm.py:261] Added request cmpl-70531f3f6e1a4a468f36816687f807e5-0.
INFO 03-01 23:57:01 [logger.py:42] Received request cmpl-0905df1266144d6aad246171dcb4a378-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:57:01 [async_llm.py:261] Added request cmpl-0905df1266144d6aad246171dcb4a378-0.
INFO 03-01 23:57:02 [logger.py:42] Received request cmpl-6b50962d54d24b4f85fd0d86bea2631b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:57:02 [async_llm.py:261] Added request cmpl-6b50962d54d24b4f85fd0d86bea2631b-0.
INFO 03-01 23:57:03 [logger.py:42] Received request cmpl-8081e80821c34c78a9a6862234c1c412-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:57:03 [async_llm.py:261] Added request cmpl-8081e80821c34c78a9a6862234c1c412-0.
INFO 03-01 23:57:04 [logger.py:42] Received request cmpl-e66922e70d0241ad926015da371c2904-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:57:04 [async_llm.py:261] Added request cmpl-e66922e70d0241ad926015da371c2904-0.
INFO 03-01 23:57:05 [logger.py:42] Received request cmpl-a75613d912b54f128d681abbe6dc9a54-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:57:05 [async_llm.py:261] Added request cmpl-a75613d912b54f128d681abbe6dc9a54-0.
INFO 03-01 23:57:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:57:06 [logger.py:42] Received request cmpl-6bab05c9cb9f47c0ba9e30cbf2c2f7cd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:57:06 [async_llm.py:261] Added request cmpl-6bab05c9cb9f47c0ba9e30cbf2c2f7cd-0.
INFO 03-01 23:57:08 [logger.py:42] Received request cmpl-be6af6715f994b5f98cece4c37a869cc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:57:08 [async_llm.py:261] Added request cmpl-be6af6715f994b5f98cece4c37a869cc-0.
INFO 03-01 23:57:09 [logger.py:42] Received request cmpl-14e63293274f48409eade441b555926b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:57:09 [async_llm.py:261] Added request cmpl-14e63293274f48409eade441b555926b-0.
INFO 03-01 23:57:10 [logger.py:42] Received request cmpl-db4033fbf05244f592b6e016b6c3ffe5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:57:10 [async_llm.py:261] Added request cmpl-db4033fbf05244f592b6e016b6c3ffe5-0.
INFO 03-01 23:57:11 [logger.py:42] Received request cmpl-31beb046fca946438e6b9adda9e9fe99-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:57:11 [async_llm.py:261] Added request cmpl-31beb046fca946438e6b9adda9e9fe99-0.
INFO 03-01 23:57:12 [logger.py:42] Received request cmpl-acb51a1326bc409caf5d17309843f5dc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:57:12 [async_llm.py:261] Added request cmpl-acb51a1326bc409caf5d17309843f5dc-0.
INFO 03-01 23:57:13 [logger.py:42] Received request cmpl-7cff5182b0c6401aada202dd0f386e5d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:57:13 [async_llm.py:261] Added request cmpl-7cff5182b0c6401aada202dd0f386e5d-0.
INFO 03-01 23:57:14 [logger.py:42] Received request cmpl-1fd7fa246099455cbb6e91119f2139cb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:57:14 [async_llm.py:261] Added request cmpl-1fd7fa246099455cbb6e91119f2139cb-0.
INFO 03-01 23:57:15 [logger.py:42] Received request cmpl-255ad1eea60e493f8a323c60ff8a8293-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:57:15 [async_llm.py:261] Added request cmpl-255ad1eea60e493f8a323c60ff8a8293-0.
INFO 03-01 23:57:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:57:16 [logger.py:42] Received request cmpl-eda23705759348d8986be13264ba4dfa-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:57:16 [async_llm.py:261] Added request cmpl-eda23705759348d8986be13264ba4dfa-0.
INFO 03-01 23:57:17 [logger.py:42] Received request cmpl-3987d0ee751440e89e7942b5873ebe29-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:57:17 [async_llm.py:261] Added request cmpl-3987d0ee751440e89e7942b5873ebe29-0.
INFO 03-01 23:57:18 [logger.py:42] Received request cmpl-95cc3d7a34db42d38d400bc2507fdec0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:57:18 [async_llm.py:261] Added request cmpl-95cc3d7a34db42d38d400bc2507fdec0-0.
INFO 03-01 23:57:19 [logger.py:42] Received request cmpl-0cf9f2af88a74f66922a92573833a3d4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:57:19 [async_llm.py:261] Added request cmpl-0cf9f2af88a74f66922a92573833a3d4-0.
INFO 03-01 23:57:21 [logger.py:42] Received request cmpl-188970bf8e584ff2a90b49e82e2881c5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:57:21 [async_llm.py:261] Added request cmpl-188970bf8e584ff2a90b49e82e2881c5-0.
INFO 03-01 23:57:22 [logger.py:42] Received request cmpl-c8976c6def884ca48d5f94b99f9c8dd1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:57:22 [async_llm.py:261] Added request cmpl-c8976c6def884ca48d5f94b99f9c8dd1-0.
INFO 03-01 23:57:23 [logger.py:42] Received request cmpl-b9074eb037ed4f88a2251d9292d6b33e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:57:23 [async_llm.py:261] Added request cmpl-b9074eb037ed4f88a2251d9292d6b33e-0.
INFO 03-01 23:57:24 [logger.py:42] Received request cmpl-3d563857dc7742bb9fe5a8373966c6ba-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:57:24 [async_llm.py:261] Added request cmpl-3d563857dc7742bb9fe5a8373966c6ba-0.
[The same request repeated roughly once per second through 23:58:07. Every entry is identical to the one above except for its timestamp and cmpl-… request ID (prompt 'write a quick sort algorithm.', max_tokens=5, temperature=0.0, all answered 200 OK), so the repeated Received request / 200 OK / Added request triplets are elided. The periodic engine-stats lines from the same window are kept below.]
INFO 03-01 23:57:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:57:36 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.6 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 1.3%, Prefix cache hit rate: 0.0%
INFO 03-01 23:57:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.9 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:57:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:58:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:58:07 [async_llm.py:261] Added request cmpl-eb5f50481e8d4ae7ab6c5fc0649a5d6b-0.
INFO 03-01 23:58:08 [logger.py:42] Received request cmpl-d1bd0fa283f34466a7b6fe3ff419ddfb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:08 [async_llm.py:261] Added request cmpl-d1bd0fa283f34466a7b6fe3ff419ddfb-0.
INFO 03-01 23:58:09 [logger.py:42] Received request cmpl-339d2deadac643b4a7cff6d0e86ef548-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:09 [async_llm.py:261] Added request cmpl-339d2deadac643b4a7cff6d0e86ef548-0.
INFO 03-01 23:58:10 [logger.py:42] Received request cmpl-5771dc253bd046f78abad870112112fb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:10 [async_llm.py:261] Added request cmpl-5771dc253bd046f78abad870112112fb-0.
INFO 03-01 23:58:11 [logger.py:42] Received request cmpl-c4740fe36419479db6f1cc39e7c9d673-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:11 [async_llm.py:261] Added request cmpl-c4740fe36419479db6f1cc39e7c9d673-0.
INFO 03-01 23:58:13 [logger.py:42] Received request cmpl-cdb5ca0f44df4e499426c75bd540504e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:13 [async_llm.py:261] Added request cmpl-cdb5ca0f44df4e499426c75bd540504e-0.
INFO 03-01 23:58:14 [logger.py:42] Received request cmpl-b3754d331f4c49fcab25fe590befe8db-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:14 [async_llm.py:261] Added request cmpl-b3754d331f4c49fcab25fe590befe8db-0.
INFO 03-01 23:58:15 [logger.py:42] Received request cmpl-c4e4c62e208f4a548eec929382f118c3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:15 [async_llm.py:261] Added request cmpl-c4e4c62e208f4a548eec929382f118c3-0.
INFO 03-01 23:58:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:58:16 [logger.py:42] Received request cmpl-d5adadc3e4ca4b50bcf9f5832c2bd05f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:16 [async_llm.py:261] Added request cmpl-d5adadc3e4ca4b50bcf9f5832c2bd05f-0.
INFO 03-01 23:58:17 [logger.py:42] Received request cmpl-833d731cbe9142b492b5eb71012d4e8f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:17 [async_llm.py:261] Added request cmpl-833d731cbe9142b492b5eb71012d4e8f-0.
INFO 03-01 23:58:18 [logger.py:42] Received request cmpl-b2fdbc713fe54b73819a2fb42a0844f7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:18 [async_llm.py:261] Added request cmpl-b2fdbc713fe54b73819a2fb42a0844f7-0.
INFO 03-01 23:58:19 [logger.py:42] Received request cmpl-d2a17bc4a0604b01b29b3e528ebd78ca-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:19 [async_llm.py:261] Added request cmpl-d2a17bc4a0604b01b29b3e528ebd78ca-0.
INFO 03-01 23:58:20 [logger.py:42] Received request cmpl-4a0e576db9664be3b645481f16e2668f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:20 [async_llm.py:261] Added request cmpl-4a0e576db9664be3b645481f16e2668f-0.
INFO 03-01 23:58:21 [logger.py:42] Received request cmpl-737fbd5b3c194852b4457e9b5bca9f38-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:21 [async_llm.py:261] Added request cmpl-737fbd5b3c194852b4457e9b5bca9f38-0.
INFO 03-01 23:58:22 [logger.py:42] Received request cmpl-fcc14c5592c74a4abc5a236ea134b7c2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:22 [async_llm.py:261] Added request cmpl-fcc14c5592c74a4abc5a236ea134b7c2-0.
INFO 03-01 23:58:23 [logger.py:42] Received request cmpl-f66605f8320949229ec634736ee59c37-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:23 [async_llm.py:261] Added request cmpl-f66605f8320949229ec634736ee59c37-0.
INFO 03-01 23:58:24 [logger.py:42] Received request cmpl-3425fe2ab97640a28fc536a5590461ef-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:24 [async_llm.py:261] Added request cmpl-3425fe2ab97640a28fc536a5590461ef-0.
INFO 03-01 23:58:26 [logger.py:42] Received request cmpl-fb6b54ed35a544fcab6bc4aadea5556c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:26 [async_llm.py:261] Added request cmpl-fb6b54ed35a544fcab6bc4aadea5556c-0.
INFO 03-01 23:58:26 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:58:27 [logger.py:42] Received request cmpl-22333370681141dcae3af589e0e86c83-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:27 [async_llm.py:261] Added request cmpl-22333370681141dcae3af589e0e86c83-0.
INFO 03-01 23:58:28 [logger.py:42] Received request cmpl-1ae10f2bd30b43ae999045f99386d30d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:28 [async_llm.py:261] Added request cmpl-1ae10f2bd30b43ae999045f99386d30d-0.
INFO 03-01 23:58:29 [logger.py:42] Received request cmpl-9bed34b37ac94138a22c5bf8105c8132-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:29 [async_llm.py:261] Added request cmpl-9bed34b37ac94138a22c5bf8105c8132-0.
INFO 03-01 23:58:30 [logger.py:42] Received request cmpl-a858a35dad0b49bebd9ed25607e2ad3b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:30 [async_llm.py:261] Added request cmpl-a858a35dad0b49bebd9ed25607e2ad3b-0.
INFO 03-01 23:58:31 [logger.py:42] Received request cmpl-b893b3f35c994be593b8439dd07d7b7b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:31 [async_llm.py:261] Added request cmpl-b893b3f35c994be593b8439dd07d7b7b-0.
INFO 03-01 23:58:32 [logger.py:42] Received request cmpl-dc57501b44d94a20b7f57adc9afb288e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:32 [async_llm.py:261] Added request cmpl-dc57501b44d94a20b7f57adc9afb288e-0.
INFO 03-01 23:58:33 [logger.py:42] Received request cmpl-f5d228cf9a374b2396eb005da094f420-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:33 [async_llm.py:261] Added request cmpl-f5d228cf9a374b2396eb005da094f420-0.
INFO 03-01 23:58:34 [logger.py:42] Received request cmpl-859c71f7db6441e8a47db0fa00b91af7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:34 [async_llm.py:261] Added request cmpl-859c71f7db6441e8a47db0fa00b91af7-0.
INFO 03-01 23:58:35 [logger.py:42] Received request cmpl-c2cb4f49cba54243b9c5bd0d55f2ef35-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:35 [async_llm.py:261] Added request cmpl-c2cb4f49cba54243b9c5bd0d55f2ef35-0.
INFO 03-01 23:58:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:58:36 [logger.py:42] Received request cmpl-cec6152778a347a1b3dd6314496d2438-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:36 [async_llm.py:261] Added request cmpl-cec6152778a347a1b3dd6314496d2438-0.
INFO 03-01 23:58:37 [logger.py:42] Received request cmpl-54477340b62e4201b88e1f6bb7a7d64b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:37 [async_llm.py:261] Added request cmpl-54477340b62e4201b88e1f6bb7a7d64b-0.
INFO 03-01 23:58:39 [logger.py:42] Received request cmpl-60a905e1b6cc4cb79c5110ebc2a15b6a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:39 [async_llm.py:261] Added request cmpl-60a905e1b6cc4cb79c5110ebc2a15b6a-0.
INFO 03-01 23:58:40 [logger.py:42] Received request cmpl-b3057258aa474460b4c794d7737c49bf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:40 [async_llm.py:261] Added request cmpl-b3057258aa474460b4c794d7737c49bf-0.
INFO 03-01 23:58:41 [logger.py:42] Received request cmpl-271fd65012b142668582c44e71078088-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:41 [async_llm.py:261] Added request cmpl-271fd65012b142668582c44e71078088-0.
INFO 03-01 23:58:42 [logger.py:42] Received request cmpl-8cd7eb4ea3d34f87834af3e6df048d01-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:42 [async_llm.py:261] Added request cmpl-8cd7eb4ea3d34f87834af3e6df048d01-0.
INFO 03-01 23:58:43 [logger.py:42] Received request cmpl-bc281c9be7ce42dbb1289ccfdafa7f64-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:43 [async_llm.py:261] Added request cmpl-bc281c9be7ce42dbb1289ccfdafa7f64-0.
INFO 03-01 23:58:44 [logger.py:42] Received request cmpl-a7f16cbc1f894893b3e0f9fc24ba5c01-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:44 [async_llm.py:261] Added request cmpl-a7f16cbc1f894893b3e0f9fc24ba5c01-0.
INFO 03-01 23:58:45 [logger.py:42] Received request cmpl-6cb3949b3ee94d1fa2d8af34bbe94f7d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:45 [async_llm.py:261] Added request cmpl-6cb3949b3ee94d1fa2d8af34bbe94f7d-0.
INFO 03-01 23:58:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
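The periodic `[loggers.py:116]` lines above summarize engine load every ~10 seconds. A minimal sketch of extracting their numeric fields for monitoring — `parse_engine_metrics` is a hypothetical helper, not part of vLLM, and assumes the line format shown in this log:

```python
import re

# Hypothetical helper (not part of vLLM): pull the numeric fields out of
# the periodic "Engine NNN:" metrics lines that loggers.py emits.
METRICS_RE = re.compile(
    r"Avg prompt throughput: (?P<prompt_tps>[\d.]+) tokens/s, "
    r"Avg generation throughput: (?P<gen_tps>[\d.]+) tokens/s, "
    r"Running: (?P<running>\d+) reqs, Waiting: (?P<waiting>\d+) reqs, "
    r"GPU KV cache usage: (?P<kv_pct>[\d.]+)%, "
    r"Prefix cache hit rate: (?P<prefix_hit_pct>[\d.]+)%"
)

def parse_engine_metrics(line: str):
    """Return the numeric fields of an engine-metrics log line, or None."""
    m = METRICS_RE.search(line)
    if m is None:
        return None
    return {k: float(v) for k, v in m.groupdict().items()}

sample = ("INFO 03-01 23:58:46 [loggers.py:116] Engine 000: "
          "Avg prompt throughput: 6.3 tokens/s, "
          "Avg generation throughput: 4.5 tokens/s, "
          "Running: 0 reqs, Waiting: 0 reqs, "
          "GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%")
print(parse_engine_metrics(sample))
```

Applied to the line above, this yields prompt/generation throughput of 6.3 and 4.5 tokens/s with empty running and waiting queues — consistent with one short (`max_tokens=5`) request arriving per second.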
INFO 03-01 23:58:46 [logger.py:42] Received request cmpl-49f4b74fe7dc4db3aa834053f9ab06f0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:46 [async_llm.py:261] Added request cmpl-49f4b74fe7dc4db3aa834053f9ab06f0-0.
INFO 03-01 23:58:47 [logger.py:42] Received request cmpl-b3e61dcb0aba40b1b24f452cbe285430-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:47 [async_llm.py:261] Added request cmpl-b3e61dcb0aba40b1b24f452cbe285430-0.
INFO 03-01 23:58:48 [logger.py:42] Received request cmpl-537c4130a86c4877a9eea2b81a80e708-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:48 [async_llm.py:261] Added request cmpl-537c4130a86c4877a9eea2b81a80e708-0.
INFO 03-01 23:58:49 [logger.py:42] Received request cmpl-c489312a9ec046ecb4c076665f5ee008-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:49 [async_llm.py:261] Added request cmpl-c489312a9ec046ecb4c076665f5ee008-0.
INFO 03-01 23:58:50 [logger.py:42] Received request cmpl-eb7b6be9b78c4182ab4a001e07b05b6b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:50 [async_llm.py:261] Added request cmpl-eb7b6be9b78c4182ab4a001e07b05b6b-0.
INFO 03-01 23:58:52 [logger.py:42] Received request cmpl-8626e4f669044d0fb2b3ea6d76bb0e83-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:52 [async_llm.py:261] Added request cmpl-8626e4f669044d0fb2b3ea6d76bb0e83-0.
INFO 03-01 23:58:53 [logger.py:42] Received request cmpl-a8b5d9e94d1045b08c2a24ccbfc0e073-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:53 [async_llm.py:261] Added request cmpl-a8b5d9e94d1045b08c2a24ccbfc0e073-0.
INFO 03-01 23:58:54 [logger.py:42] Received request cmpl-b7030c999a0044f4a6c9acf17865c593-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:54 [async_llm.py:261] Added request cmpl-b7030c999a0044f4a6c9acf17865c593-0.
INFO 03-01 23:58:55 [logger.py:42] Received request cmpl-e4e2c5c65f354305a50a8e5bc9800b97-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:55 [async_llm.py:261] Added request cmpl-e4e2c5c65f354305a50a8e5bc9800b97-0.
INFO 03-01 23:58:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:58:56 [logger.py:42] Received request cmpl-373f3eb115b441bea02a588538233293-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:56 [async_llm.py:261] Added request cmpl-373f3eb115b441bea02a588538233293-0.
INFO 03-01 23:58:57 [logger.py:42] Received request cmpl-43e550b9879348419a6cddb4e37c7ff9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:57 [async_llm.py:261] Added request cmpl-43e550b9879348419a6cddb4e37c7ff9-0.
INFO 03-01 23:58:58 [logger.py:42] Received request cmpl-23a377e6cef14124a06b31a8ac30af24-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:58 [async_llm.py:261] Added request cmpl-23a377e6cef14124a06b31a8ac30af24-0.
INFO 03-01 23:58:59 [logger.py:42] Received request cmpl-239d4b5ddcf54ac198e95a3d9571d4a2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:59 [async_llm.py:261] Added request cmpl-239d4b5ddcf54ac198e95a3d9571d4a2-0.
INFO 03-01 23:59:00 [logger.py:42] Received request cmpl-f19cdd029cee4dd9859f5fcecd12c74d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:00 [async_llm.py:261] Added request cmpl-f19cdd029cee4dd9859f5fcecd12c74d-0.
INFO 03-01 23:59:01 [logger.py:42] Received request cmpl-12b9e0880d5744488ef98dff0a377eab-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:01 [async_llm.py:261] Added request cmpl-12b9e0880d5744488ef98dff0a377eab-0.
INFO 03-01 23:59:02 [logger.py:42] Received request cmpl-8820b4faa58040e091835be671c3ad7c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:02 [async_llm.py:261] Added request cmpl-8820b4faa58040e091835be671c3ad7c-0.
INFO 03-01 23:59:03 [logger.py:42] Received request cmpl-f3f3f39901924beb878ce9e7b87d8f61-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:03 [async_llm.py:261] Added request cmpl-f3f3f39901924beb878ce9e7b87d8f61-0.
INFO 03-01 23:59:05 [logger.py:42] Received request cmpl-5a034d2148824a47b1dd83712354daaa-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:05 [async_llm.py:261] Added request cmpl-5a034d2148824a47b1dd83712354daaa-0.
INFO 03-01 23:59:06 [logger.py:42] Received request cmpl-c59b62aad80e47f69c667187639c7854-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:06 [async_llm.py:261] Added request cmpl-c59b62aad80e47f69c667187639c7854-0.
INFO 03-01 23:59:06 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:59:07 [logger.py:42] Received request cmpl-d178b303a5c442f5ae38f56cc68c1cb3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:07 [async_llm.py:261] Added request cmpl-d178b303a5c442f5ae38f56cc68c1cb3-0.
INFO 03-01 23:59:08 [logger.py:42] Received request cmpl-5ee57426dd19478f96f5f3229fb3497d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:08 [async_llm.py:261] Added request cmpl-5ee57426dd19478f96f5f3229fb3497d-0.
INFO 03-01 23:59:09 [logger.py:42] Received request cmpl-b6ee92e409a14c569e4156f4235b918f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:09 [async_llm.py:261] Added request cmpl-b6ee92e409a14c569e4156f4235b918f-0.
INFO 03-01 23:59:10 [logger.py:42] Received request cmpl-e71bb82c703a4a19a817f98654345e0f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:10 [async_llm.py:261] Added request cmpl-e71bb82c703a4a19a817f98654345e0f-0.
INFO 03-01 23:59:11 [logger.py:42] Received request cmpl-9466612429864fedb84b4d960d9c73e2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:11 [async_llm.py:261] Added request cmpl-9466612429864fedb84b4d960d9c73e2-0.
INFO 03-01 23:59:12 [logger.py:42] Received request cmpl-5978b744997346bdabf887a07df4cc68-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:12 [async_llm.py:261] Added request cmpl-5978b744997346bdabf887a07df4cc68-0.
INFO 03-01 23:59:13 [logger.py:42] Received request cmpl-494fac24398e4c8cab57023e5fd3d271-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:13 [async_llm.py:261] Added request cmpl-494fac24398e4c8cab57023e5fd3d271-0.
INFO 03-01 23:59:14 [logger.py:42] Received request cmpl-76603e6515e74af2a6468c356663f3ca-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:14 [async_llm.py:261] Added request cmpl-76603e6515e74af2a6468c356663f3ca-0.
INFO 03-01 23:59:15 [logger.py:42] Received request cmpl-8e493f3dfd1b49df8739cd8af5de96c1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:15 [async_llm.py:261] Added request cmpl-8e493f3dfd1b49df8739cd8af5de96c1-0.
INFO 03-01 23:59:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:59:16 [logger.py:42] Received request cmpl-f9092ff4341a4aa5b4a3bbf37dddba2e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:16 [async_llm.py:261] Added request cmpl-f9092ff4341a4aa5b4a3bbf37dddba2e-0.
INFO 03-01 23:59:18 [logger.py:42] Received request cmpl-0e4238e5eea14d0dbfad356088206bc2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:18 [async_llm.py:261] Added request cmpl-0e4238e5eea14d0dbfad356088206bc2-0.
INFO 03-01 23:59:19 [logger.py:42] Received request cmpl-df2e7e7f1ea04e97ba122c1a7899714a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:19 [async_llm.py:261] Added request cmpl-df2e7e7f1ea04e97ba122c1a7899714a-0.
INFO 03-01 23:59:20 [logger.py:42] Received request cmpl-23d76e11d2a748cd9422912972a2749c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:20 [async_llm.py:261] Added request cmpl-23d76e11d2a748cd9422912972a2749c-0.
INFO 03-01 23:59:21 [logger.py:42] Received request cmpl-c9a1fee6dcb44c899627cb271b85eb0f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:21 [async_llm.py:261] Added request cmpl-c9a1fee6dcb44c899627cb271b85eb0f-0.
INFO 03-01 23:59:22 [logger.py:42] Received request cmpl-65bcd2cc396149379c6060ca8b99f4f8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:22 [async_llm.py:261] Added request cmpl-65bcd2cc396149379c6060ca8b99f4f8-0.
INFO 03-01 23:59:23 [logger.py:42] Received request cmpl-da18299bfdcc414681ef5911b29248c6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:23 [async_llm.py:261] Added request cmpl-da18299bfdcc414681ef5911b29248c6-0.
INFO 03-01 23:59:24 [logger.py:42] Received request cmpl-4b795cacc79f4bf0a10e1742dafc52b2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:24 [async_llm.py:261] Added request cmpl-4b795cacc79f4bf0a10e1742dafc52b2-0.
INFO 03-01 23:59:25 [logger.py:42] Received request cmpl-ebf39f03dd2f470fbf2e6b527d59c959-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:25 [async_llm.py:261] Added request cmpl-ebf39f03dd2f470fbf2e6b527d59c959-0.
INFO 03-01 23:59:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:59:26 [logger.py:42] Received request cmpl-a7b569aa9b394c12adfc0d8a4f94267e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:26 [async_llm.py:261] Added request cmpl-a7b569aa9b394c12adfc0d8a4f94267e-0.
INFO 03-01 23:59:27 [logger.py:42] Received request cmpl-f6831f708964427089f5cddb4db2b035-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:27 [async_llm.py:261] Added request cmpl-f6831f708964427089f5cddb4db2b035-0.
INFO 03-01 23:59:28 [logger.py:42] Received request cmpl-0c9f0b0001d345f191c434fff7f9d3bf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:28 [async_llm.py:261] Added request cmpl-0c9f0b0001d345f191c434fff7f9d3bf-0.
INFO 03-01 23:59:29 [logger.py:42] Received request cmpl-9bf42d0a5c6942bd937e52b631328c65-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:29 [async_llm.py:261] Added request cmpl-9bf42d0a5c6942bd937e52b631328c65-0.
INFO 03-01 23:59:31 [logger.py:42] Received request cmpl-b5697182dbea4c6598d7104e64596aad-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:31 [async_llm.py:261] Added request cmpl-b5697182dbea4c6598d7104e64596aad-0.
INFO 03-01 23:59:32 [logger.py:42] Received request cmpl-8786f38bda0547c2ba4ceeba15da50d6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:32 [async_llm.py:261] Added request cmpl-8786f38bda0547c2ba4ceeba15da50d6-0.
INFO 03-01 23:59:33 [logger.py:42] Received request cmpl-55528f2a1f31420aa649718225e3a6ac-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:33 [async_llm.py:261] Added request cmpl-55528f2a1f31420aa649718225e3a6ac-0.
INFO 03-01 23:59:34 [logger.py:42] Received request cmpl-4f231dbcfbee447096a8021603b94aaf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:34 [async_llm.py:261] Added request cmpl-4f231dbcfbee447096a8021603b94aaf-0.
INFO 03-01 23:59:35 [logger.py:42] Received request cmpl-d2170db192824b7e96da6ce95160a623-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:35 [async_llm.py:261] Added request cmpl-d2170db192824b7e96da6ce95160a623-0.
INFO 03-01 23:59:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:59:36 [logger.py:42] Received request cmpl-daf98631bf7b49ed9e855abde5b84331-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:36 [async_llm.py:261] Added request cmpl-daf98631bf7b49ed9e855abde5b84331-0.
INFO 03-01 23:59:37 [logger.py:42] Received request cmpl-3282e40bb68b485a87b602d013ec60cf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:37 [async_llm.py:261] Added request cmpl-3282e40bb68b485a87b602d013ec60cf-0.
INFO 03-01 23:59:38 [logger.py:42] Received request cmpl-8106961174fe451481149585b4eac708-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:38 [async_llm.py:261] Added request cmpl-8106961174fe451481149585b4eac708-0.
INFO 03-01 23:59:39 [logger.py:42] Received request cmpl-79ca9889e5fc4e33b53036cbc151897b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:39 [async_llm.py:261] Added request cmpl-79ca9889e5fc4e33b53036cbc151897b-0.
INFO 03-01 23:59:40 [logger.py:42] Received request cmpl-f2a87a79413d45029ef6dc5e9d9b24d4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:40 [async_llm.py:261] Added request cmpl-f2a87a79413d45029ef6dc5e9d9b24d4-0.
INFO 03-01 23:59:41 [logger.py:42] Received request cmpl-37b4f9333dc2448689a8547dd628bc9d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:41 [async_llm.py:261] Added request cmpl-37b4f9333dc2448689a8547dd628bc9d-0.
INFO 03-01 23:59:42 [logger.py:42] Received request cmpl-4dac7806c1fb4364bf5a04b5ee6476be-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:42 [async_llm.py:261] Added request cmpl-4dac7806c1fb4364bf5a04b5ee6476be-0.
INFO 03-01 23:59:44 [logger.py:42] Received request cmpl-bcb2de32e2054ab1be0138e335174875-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:44 [async_llm.py:261] Added request cmpl-bcb2de32e2054ab1be0138e335174875-0.
INFO 03-01 23:59:45 [logger.py:42] Received request cmpl-5bb4b34977b64a438f4fd713be9c3180-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:45 [async_llm.py:261] Added request cmpl-5bb4b34977b64a438f4fd713be9c3180-0.
INFO 03-01 23:59:46 [logger.py:42] Received request cmpl-29b0267d8e59420c8637183ce2c17194-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:46 [async_llm.py:261] Added request cmpl-29b0267d8e59420c8637183ce2c17194-0.
INFO 03-01 23:59:46 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.6 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 1.3%, Prefix cache hit rate: 0.0%
INFO 03-01 23:59:47 [logger.py:42] Received request cmpl-92483ad3f63a4df585fd0a4322120cc2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:47 [async_llm.py:261] Added request cmpl-92483ad3f63a4df585fd0a4322120cc2-0.
INFO 03-01 23:59:48 [logger.py:42] Received request cmpl-cad39d930bac44b2965415edd8e273af-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:48 [async_llm.py:261] Added request cmpl-cad39d930bac44b2965415edd8e273af-0.
INFO 03-01 23:59:49 [logger.py:42] Received request cmpl-0653db060c3a4599b3af829d86ed24e8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:49 [async_llm.py:261] Added request cmpl-0653db060c3a4599b3af829d86ed24e8-0.
INFO 03-01 23:59:50 [logger.py:42] Received request cmpl-786591e0c7c841e3ab0976fece05314d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:50 [async_llm.py:261] Added request cmpl-786591e0c7c841e3ab0976fece05314d-0.
INFO 03-01 23:59:51 [logger.py:42] Received request cmpl-0dd3e7b8b6614d00ac41bb8857f1e747-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:51 [async_llm.py:261] Added request cmpl-0dd3e7b8b6614d00ac41bb8857f1e747-0.
INFO 03-01 23:59:52 [logger.py:42] Received request cmpl-e80275ada5e146c986128bae6111003a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:52 [async_llm.py:261] Added request cmpl-e80275ada5e146c986128bae6111003a-0.
INFO 03-01 23:59:53 [logger.py:42] Received request cmpl-13f92bb8413c4a7084381f8abb5ca560-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:53 [async_llm.py:261] Added request cmpl-13f92bb8413c4a7084381f8abb5ca560-0.
INFO 03-01 23:59:54 [logger.py:42] Received request cmpl-b4cf76c9f73b45d4969683aaa73bdd28-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:54 [async_llm.py:261] Added request cmpl-b4cf76c9f73b45d4969683aaa73bdd28-0.
INFO 03-01 23:59:55 [logger.py:42] Received request cmpl-05ce272d38354fac83832568f173b6b1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:55 [async_llm.py:261] Added request cmpl-05ce272d38354fac83832568f173b6b1-0.
INFO 03-01 23:59:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.9 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-01 23:59:57 [logger.py:42] Received request cmpl-550162e52f254dc88d78323c5a34c8ed-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:57 [async_llm.py:261] Added request cmpl-550162e52f254dc88d78323c5a34c8ed-0.
INFO 03-01 23:59:58 [logger.py:42] Received request cmpl-3c99e8624f834e7b938b1ff2a8940078-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:58 [async_llm.py:261] Added request cmpl-3c99e8624f834e7b938b1ff2a8940078-0.
INFO 03-01 23:59:59 [logger.py:42] Received request cmpl-e8b63725b11942bd8b7c4b86ad446a53-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:59 [async_llm.py:261] Added request cmpl-e8b63725b11942bd8b7c4b86ad446a53-0.
INFO 03-02 00:00:00 [logger.py:42] Received request cmpl-8a7b0204f9124d9b9f68dc018f198989-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:00:00 [async_llm.py:261] Added request cmpl-8a7b0204f9124d9b9f68dc018f198989-0.
[… remaining request/response triplets from 00:00:01 through 00:00:44 elided; each repeats the same prompt and SamplingParams, differing only in request ID and timestamp …]
INFO 03-02 00:00:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:00:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:00:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:00:36 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:00:44 [async_llm.py:261] Added request cmpl-366668976db245d3acc6d3e091cd40dd-0.
INFO 03-02 00:00:45 [logger.py:42] Received request cmpl-77a4b173b3c3481cb1f96f1fec575c05-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:00:45 [async_llm.py:261] Added request cmpl-77a4b173b3c3481cb1f96f1fec575c05-0.
INFO 03-02 00:00:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:00:46 [logger.py:42] Received request cmpl-ccb61322ae7e45c0b37f4a8756904d39-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:00:46 [async_llm.py:261] Added request cmpl-ccb61322ae7e45c0b37f4a8756904d39-0.
INFO 03-02 00:00:47 [logger.py:42] Received request cmpl-22893c5ad67d416fa72c1425b0a0cee0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:00:47 [async_llm.py:261] Added request cmpl-22893c5ad67d416fa72c1425b0a0cee0-0.
INFO 03-02 00:00:49 [logger.py:42] Received request cmpl-cb3ae99e0b2a42059f37d28b27884461-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:00:49 [async_llm.py:261] Added request cmpl-cb3ae99e0b2a42059f37d28b27884461-0.
INFO 03-02 00:00:50 [logger.py:42] Received request cmpl-2f584dd1a7354a9cbba83e094140fa88-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:00:50 [async_llm.py:261] Added request cmpl-2f584dd1a7354a9cbba83e094140fa88-0.
INFO 03-02 00:00:51 [logger.py:42] Received request cmpl-6656c5b44a1146bfac33a2aa531f3509-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:00:51 [async_llm.py:261] Added request cmpl-6656c5b44a1146bfac33a2aa531f3509-0.
INFO 03-02 00:00:52 [logger.py:42] Received request cmpl-35287ab4d52c455e9830eed8d285abb2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:00:52 [async_llm.py:261] Added request cmpl-35287ab4d52c455e9830eed8d285abb2-0.
INFO 03-02 00:00:53 [logger.py:42] Received request cmpl-42b1ca0e893740029966017ab4be16d5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:00:53 [async_llm.py:261] Added request cmpl-42b1ca0e893740029966017ab4be16d5-0.
INFO 03-02 00:00:54 [logger.py:42] Received request cmpl-ff6650b91d8e45d6bd3cab0f3ac909d5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:00:54 [async_llm.py:261] Added request cmpl-ff6650b91d8e45d6bd3cab0f3ac909d5-0.
INFO 03-02 00:00:55 [logger.py:42] Received request cmpl-aec02b965d1f4d95b0d1edcf298bd2a7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:00:55 [async_llm.py:261] Added request cmpl-aec02b965d1f4d95b0d1edcf298bd2a7-0.
INFO 03-02 00:00:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:00:56 [logger.py:42] Received request cmpl-cf147a1fff744d479a2728c552e517b8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:00:56 [async_llm.py:261] Added request cmpl-cf147a1fff744d479a2728c552e517b8-0.
INFO 03-02 00:00:57 [logger.py:42] Received request cmpl-e3c73a86e3fb4703a6153dd113961ac2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:00:57 [async_llm.py:261] Added request cmpl-e3c73a86e3fb4703a6153dd113961ac2-0.
INFO 03-02 00:00:58 [logger.py:42] Received request cmpl-2edae56c56f548c695b150cd0907dd42-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:00:58 [async_llm.py:261] Added request cmpl-2edae56c56f548c695b150cd0907dd42-0.
INFO 03-02 00:00:59 [logger.py:42] Received request cmpl-f32e11b322c244b0a03a0d64cc500193-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:00:59 [async_llm.py:261] Added request cmpl-f32e11b322c244b0a03a0d64cc500193-0.
INFO 03-02 00:01:00 [logger.py:42] Received request cmpl-0ff0d48875ac4e04b6c38954e2524543-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:00 [async_llm.py:261] Added request cmpl-0ff0d48875ac4e04b6c38954e2524543-0.
INFO 03-02 00:01:02 [logger.py:42] Received request cmpl-ddba41c090e94956b232c2754ac601a4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:02 [async_llm.py:261] Added request cmpl-ddba41c090e94956b232c2754ac601a4-0.
INFO 03-02 00:01:03 [logger.py:42] Received request cmpl-89bae8beaf8f40339beb86515feda854-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:03 [async_llm.py:261] Added request cmpl-89bae8beaf8f40339beb86515feda854-0.
INFO 03-02 00:01:04 [logger.py:42] Received request cmpl-ccce80e2fb5348c28bb2505bed12f2f8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:04 [async_llm.py:261] Added request cmpl-ccce80e2fb5348c28bb2505bed12f2f8-0.
INFO 03-02 00:01:05 [logger.py:42] Received request cmpl-eb6b5d53657e41d3a885735d519c06aa-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:05 [async_llm.py:261] Added request cmpl-eb6b5d53657e41d3a885735d519c06aa-0.
INFO 03-02 00:01:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:01:06 [logger.py:42] Received request cmpl-82ec3e82af084059ba70d599452e1afc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:06 [async_llm.py:261] Added request cmpl-82ec3e82af084059ba70d599452e1afc-0.
INFO 03-02 00:01:07 [logger.py:42] Received request cmpl-3fe5bd0c0a5c4c0a88ca809662264030-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:07 [async_llm.py:261] Added request cmpl-3fe5bd0c0a5c4c0a88ca809662264030-0.
INFO 03-02 00:01:08 [logger.py:42] Received request cmpl-3846d6d822814bb7a2a95ab26d44c75e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:08 [async_llm.py:261] Added request cmpl-3846d6d822814bb7a2a95ab26d44c75e-0.
INFO 03-02 00:01:09 [logger.py:42] Received request cmpl-c0a2d7f77d15418db4dcbc5b010c5fa6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:09 [async_llm.py:261] Added request cmpl-c0a2d7f77d15418db4dcbc5b010c5fa6-0.
INFO 03-02 00:01:10 [logger.py:42] Received request cmpl-6c5114bb5f6644c897886dfb7e186b2a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:10 [async_llm.py:261] Added request cmpl-6c5114bb5f6644c897886dfb7e186b2a-0.
INFO 03-02 00:01:11 [logger.py:42] Received request cmpl-ca571f7022234156808996584125f640-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:11 [async_llm.py:261] Added request cmpl-ca571f7022234156808996584125f640-0.
INFO 03-02 00:01:12 [logger.py:42] Received request cmpl-b98fd3a6e5c64615915d7faaf2459906-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:12 [async_llm.py:261] Added request cmpl-b98fd3a6e5c64615915d7faaf2459906-0.
INFO 03-02 00:01:13 [logger.py:42] Received request cmpl-5260642cba774fbe982c51666aaef904-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:13 [async_llm.py:261] Added request cmpl-5260642cba774fbe982c51666aaef904-0.
INFO 03-02 00:01:15 [logger.py:42] Received request cmpl-cc567e33fa0b41e69a6fee1cb5cc08b6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:15 [async_llm.py:261] Added request cmpl-cc567e33fa0b41e69a6fee1cb5cc08b6-0.
INFO 03-02 00:01:16 [logger.py:42] Received request cmpl-c12bdb9550744da695be2b9d9497c21a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:16 [async_llm.py:261] Added request cmpl-c12bdb9550744da695be2b9d9497c21a-0.
INFO 03-02 00:01:16 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:01:17 [logger.py:42] Received request cmpl-28214b71ec4343f29a61bfe3a66299f7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:17 [async_llm.py:261] Added request cmpl-28214b71ec4343f29a61bfe3a66299f7-0.
INFO 03-02 00:01:18 [logger.py:42] Received request cmpl-850d8af785b3444ea05cf98b2b1b565a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:18 [async_llm.py:261] Added request cmpl-850d8af785b3444ea05cf98b2b1b565a-0.
INFO 03-02 00:01:19 [logger.py:42] Received request cmpl-ed6f41680084468d9fddcf3cf5c6177c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:19 [async_llm.py:261] Added request cmpl-ed6f41680084468d9fddcf3cf5c6177c-0.
INFO 03-02 00:01:20 [logger.py:42] Received request cmpl-2f7ff7c4488e494ba25e56ae177b311f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:20 [async_llm.py:261] Added request cmpl-2f7ff7c4488e494ba25e56ae177b311f-0.
INFO 03-02 00:01:21 [logger.py:42] Received request cmpl-f7e27b0744174afcadfa17f5bf14adc3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:21 [async_llm.py:261] Added request cmpl-f7e27b0744174afcadfa17f5bf14adc3-0.
INFO 03-02 00:01:22 [logger.py:42] Received request cmpl-0bc9071b743b424181b70750831e4b7f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:22 [async_llm.py:261] Added request cmpl-0bc9071b743b424181b70750831e4b7f-0.
INFO 03-02 00:01:23 [logger.py:42] Received request cmpl-0abf7fe59d024ec5b881121100eef879-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:23 [async_llm.py:261] Added request cmpl-0abf7fe59d024ec5b881121100eef879-0.
INFO 03-02 00:01:24 [logger.py:42] Received request cmpl-15e25655ca324f76adfd9f3c45ec2ed1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:24 [async_llm.py:261] Added request cmpl-15e25655ca324f76adfd9f3c45ec2ed1-0.
INFO 03-02 00:01:25 [logger.py:42] Received request cmpl-6b9a9c507f3a4a75964cb079db18b057-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:25 [async_llm.py:261] Added request cmpl-6b9a9c507f3a4a75964cb079db18b057-0.
INFO 03-02 00:01:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:01:26 [logger.py:42] Received request cmpl-185a0693a46e4435a44a38944743ad8c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:26 [async_llm.py:261] Added request cmpl-185a0693a46e4435a44a38944743ad8c-0.
INFO 03-02 00:01:28 [logger.py:42] Received request cmpl-497a6f0df3a543c8a05a78e04eb82591-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:28 [async_llm.py:261] Added request cmpl-497a6f0df3a543c8a05a78e04eb82591-0.
INFO 03-02 00:01:29 [logger.py:42] Received request cmpl-cdcc4f6cf5774fb59360fa083a1a7fa8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:29 [async_llm.py:261] Added request cmpl-cdcc4f6cf5774fb59360fa083a1a7fa8-0.
INFO 03-02 00:01:30 [logger.py:42] Received request cmpl-f1ff966c3b5c458bb57fe69a6522ef0c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:30 [async_llm.py:261] Added request cmpl-f1ff966c3b5c458bb57fe69a6522ef0c-0.
INFO 03-02 00:01:31 [logger.py:42] Received request cmpl-b5d8573793e14b47b142029a042ac8bc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:31 [async_llm.py:261] Added request cmpl-b5d8573793e14b47b142029a042ac8bc-0.
INFO 03-02 00:01:32 [logger.py:42] Received request cmpl-8d9866c71272403499471a6db5b6d998-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:32 [async_llm.py:261] Added request cmpl-8d9866c71272403499471a6db5b6d998-0.
INFO 03-02 00:01:33 [logger.py:42] Received request cmpl-532b14f618de46d3b4ac454159f2c29c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:33 [async_llm.py:261] Added request cmpl-532b14f618de46d3b4ac454159f2c29c-0.
INFO 03-02 00:01:34 [logger.py:42] Received request cmpl-cc565090554b4a11b48497bf2a1b5668-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:34 [async_llm.py:261] Added request cmpl-cc565090554b4a11b48497bf2a1b5668-0.
INFO 03-02 00:01:35 [logger.py:42] Received request cmpl-51f16d0baa3e417480fc33301071640d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:35 [async_llm.py:261] Added request cmpl-51f16d0baa3e417480fc33301071640d-0.
INFO 03-02 00:01:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:01:36 [logger.py:42] Received request cmpl-942eb1e31eb54ebc8f01c54607f6fa0c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:36 [async_llm.py:261] Added request cmpl-942eb1e31eb54ebc8f01c54607f6fa0c-0.
INFO 03-02 00:01:37 [logger.py:42] Received request cmpl-71af75498e8a410a9c229eea0335414e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:37 [async_llm.py:261] Added request cmpl-71af75498e8a410a9c229eea0335414e-0.
INFO 03-02 00:01:38 [logger.py:42] Received request cmpl-d401225bc60e4775bd54ac2a340feaf5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:38 [async_llm.py:261] Added request cmpl-d401225bc60e4775bd54ac2a340feaf5-0.
INFO 03-02 00:01:39 [logger.py:42] Received request cmpl-6056231da2854006af75ba64bf9e2f1e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:39 [async_llm.py:261] Added request cmpl-6056231da2854006af75ba64bf9e2f1e-0.
INFO 03-02 00:01:41 [logger.py:42] Received request cmpl-f1b99bb6da2b4c179e6f86c1db37f929-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:41 [async_llm.py:261] Added request cmpl-f1b99bb6da2b4c179e6f86c1db37f929-0.
INFO 03-02 00:01:42 [logger.py:42] Received request cmpl-85b70a6b135c4e01a10a0eb355085960-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:42 [async_llm.py:261] Added request cmpl-85b70a6b135c4e01a10a0eb355085960-0.
INFO 03-02 00:01:43 [logger.py:42] Received request cmpl-4ab7547675024125a46774e199816290-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:43 [async_llm.py:261] Added request cmpl-4ab7547675024125a46774e199816290-0.
INFO 03-02 00:01:44 [logger.py:42] Received request cmpl-a4a7770c22e84daaaca9002bd4eb2ef3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:44 [async_llm.py:261] Added request cmpl-a4a7770c22e84daaaca9002bd4eb2ef3-0.
INFO 03-02 00:01:45 [logger.py:42] Received request cmpl-0be1da973482438d835fb951e49e3a58-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:45 [async_llm.py:261] Added request cmpl-0be1da973482438d835fb951e49e3a58-0.
INFO 03-02 00:01:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:01:46 [logger.py:42] Received request cmpl-24ffdbd2826747ebbe6876d4531cf889-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:46 [async_llm.py:261] Added request cmpl-24ffdbd2826747ebbe6876d4531cf889-0.
INFO 03-02 00:01:47 [logger.py:42] Received request cmpl-9f05c9ef2bac4890b6dfe0230080d976-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:47 [async_llm.py:261] Added request cmpl-9f05c9ef2bac4890b6dfe0230080d976-0.
INFO 03-02 00:01:48 [logger.py:42] Received request cmpl-7269e4f9696743928e65663cc22ca52e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:48 [async_llm.py:261] Added request cmpl-7269e4f9696743928e65663cc22ca52e-0.
INFO 03-02 00:01:49 [logger.py:42] Received request cmpl-b21b430c3b5c425d98ea78d55de49b8d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:49 [async_llm.py:261] Added request cmpl-b21b430c3b5c425d98ea78d55de49b8d-0.
INFO 03-02 00:01:50 [logger.py:42] Received request cmpl-a80b9d9dc19a4b65819fe352104fe896-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:50 [async_llm.py:261] Added request cmpl-a80b9d9dc19a4b65819fe352104fe896-0.
INFO 03-02 00:01:51 [logger.py:42] Received request cmpl-feb98444b4144ad380d22ecbaca086e1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:51 [async_llm.py:261] Added request cmpl-feb98444b4144ad380d22ecbaca086e1-0.
INFO 03-02 00:01:52 [logger.py:42] Received request cmpl-cc551948216e494cbdcbd7ff858c20e8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:52 [async_llm.py:261] Added request cmpl-cc551948216e494cbdcbd7ff858c20e8-0.
INFO 03-02 00:01:54 [logger.py:42] Received request cmpl-7f95c1d61df24be6b235f5c45761a3f3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:54 [async_llm.py:261] Added request cmpl-7f95c1d61df24be6b235f5c45761a3f3-0.
INFO 03-02 00:01:55 [logger.py:42] Received request cmpl-de26d83973104c80b00210665f12a90e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:55 [async_llm.py:261] Added request cmpl-de26d83973104c80b00210665f12a90e-0.
INFO 03-02 00:01:56 [logger.py:42] Received request cmpl-43d067d1730d4d6cae26a7b8da48236e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:56 [async_llm.py:261] Added request cmpl-43d067d1730d4d6cae26a7b8da48236e-0.
INFO 03-02 00:01:56 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.6 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 1.3%, Prefix cache hit rate: 0.0%
INFO 03-02 00:01:57 [logger.py:42] Received request cmpl-4b04c64dce6444a5b9a3ed11389d13bf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:57 [async_llm.py:261] Added request cmpl-4b04c64dce6444a5b9a3ed11389d13bf-0.
[… 8 near-identical request/response/added triplets (00:01:58–00:02:05) elided: one per second, same prompt 'write a quick sort algorithm.', max_tokens=5, all 200 OK …]
INFO 03-02 00:02:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.9 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[… 9 near-identical request/response/added triplets (00:02:07–00:02:15) elided: same prompt, max_tokens=5, all 200 OK …]
INFO 03-02 00:02:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[… 9 near-identical request/response/added triplets (00:02:16–00:02:25) elided: same prompt, max_tokens=5, all 200 OK …]
INFO 03-02 00:02:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[… 9 near-identical request/response/added triplets (00:02:26–00:02:35) elided: same prompt, max_tokens=5, all 200 OK …]
INFO 03-02 00:02:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[… 5 near-identical request/response/added triplets (00:02:36–00:02:40) elided: same prompt, max_tokens=5, all 200 OK …]
INFO 03-02 00:02:41 [logger.py:42] Received request cmpl-6d4d0f2d9b074a099454267dc52d04d9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:02:41 [async_llm.py:261] Added request cmpl-6d4d0f2d9b074a099454267dc52d04d9-0.
INFO 03-02 00:02:42 [logger.py:42] Received request cmpl-6f498b83f8c64889bf5a5c2a70b11804-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:02:42 [async_llm.py:261] Added request cmpl-6f498b83f8c64889bf5a5c2a70b11804-0.
INFO 03-02 00:02:43 [logger.py:42] Received request cmpl-38dc6d4bb56e4a52959d162e6226ced6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:02:43 [async_llm.py:261] Added request cmpl-38dc6d4bb56e4a52959d162e6226ced6-0.
INFO 03-02 00:02:44 [logger.py:42] Received request cmpl-173edbe78d964a22b79ffeecbf38654a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:02:44 [async_llm.py:261] Added request cmpl-173edbe78d964a22b79ffeecbf38654a-0.
INFO 03-02 00:02:46 [logger.py:42] Received request cmpl-138b0200e97c41b886f39ac6e2e2740f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:02:46 [async_llm.py:261] Added request cmpl-138b0200e97c41b886f39ac6e2e2740f-0.
INFO 03-02 00:02:46 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
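The periodic `loggers.py:116` summary packs several engine metrics into one line: prompt and generation throughput, running/waiting request counts, KV-cache usage, and prefix-cache hit rate. A minimal sketch for pulling those figures out of such a line — the regex is an assumption matched against the format shown here and may need adjusting for other engine versions:

```python
import re

# Sample metrics line copied from the log above.
LINE = ("Engine 000: Avg prompt throughput: 7.0 tokens/s, "
        "Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, "
        "Waiting: 0 reqs, GPU KV cache usage: 0.7%, "
        "Prefix cache hit rate: 0.0%")

# Named groups, one per metric in the summary line.
PATTERN = re.compile(
    r"Avg prompt throughput: (?P<prompt_tps>[\d.]+) tokens/s, "
    r"Avg generation throughput: (?P<gen_tps>[\d.]+) tokens/s, "
    r"Running: (?P<running>\d+) reqs, Waiting: (?P<waiting>\d+) reqs, "
    r"GPU KV cache usage: (?P<kv_pct>[\d.]+)%, "
    r"Prefix cache hit rate: (?P<prefix_hit_pct>[\d.]+)%"
)

m = PATTERN.search(LINE)
metrics = {k: float(v) for k, v in m.groupdict().items()}
print(metrics)
```

Note that `Running: 0 reqs` in these summaries is consistent with the traffic pattern: each 5-token request finishes well before the next one arrives, so the queue is empty whenever the ten-second summary fires.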
INFO 03-02 00:02:47 [logger.py:42] Received request cmpl-86ba677341b6476cb6ae5313e8a765aa-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:02:47 [async_llm.py:261] Added request cmpl-86ba677341b6476cb6ae5313e8a765aa-0.
INFO 03-02 00:02:48 [logger.py:42] Received request cmpl-6a35e320050246588b9e4559c6180933-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:02:48 [async_llm.py:261] Added request cmpl-6a35e320050246588b9e4559c6180933-0.
INFO 03-02 00:02:49 [logger.py:42] Received request cmpl-46a371c6929c4e3caaa765da07c75eae-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:02:49 [async_llm.py:261] Added request cmpl-46a371c6929c4e3caaa765da07c75eae-0.
INFO 03-02 00:02:50 [logger.py:42] Received request cmpl-06889134144c495e83ae1dc02bde979c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:02:50 [async_llm.py:261] Added request cmpl-06889134144c495e83ae1dc02bde979c-0.
INFO 03-02 00:02:51 [logger.py:42] Received request cmpl-2d7f223161294b1198b15adb68e77f54-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:02:51 [async_llm.py:261] Added request cmpl-2d7f223161294b1198b15adb68e77f54-0.
INFO 03-02 00:02:52 [logger.py:42] Received request cmpl-6b2f60221f3746f184f72c123f1e86ad-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:02:52 [async_llm.py:261] Added request cmpl-6b2f60221f3746f184f72c123f1e86ad-0.
INFO 03-02 00:02:53 [logger.py:42] Received request cmpl-81d2a5bd247f4c92af777ce0d54dd342-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:02:53 [async_llm.py:261] Added request cmpl-81d2a5bd247f4c92af777ce0d54dd342-0.
INFO 03-02 00:02:54 [logger.py:42] Received request cmpl-2df05fd3f7274c8a94c58c7710eb467b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:02:54 [async_llm.py:261] Added request cmpl-2df05fd3f7274c8a94c58c7710eb467b-0.
INFO 03-02 00:02:55 [logger.py:42] Received request cmpl-0298b368d2c94836b40f92f125eb09c7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:02:55 [async_llm.py:261] Added request cmpl-0298b368d2c94836b40f92f125eb09c7-0.
INFO 03-02 00:02:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:02:56 [logger.py:42] Received request cmpl-a55e74b7e44d4cc7873a5e313307475a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:02:56 [async_llm.py:261] Added request cmpl-a55e74b7e44d4cc7873a5e313307475a-0.
INFO 03-02 00:02:57 [logger.py:42] Received request cmpl-12f0064d012b41d5ab2aba2ac8751d42-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:02:57 [async_llm.py:261] Added request cmpl-12f0064d012b41d5ab2aba2ac8751d42-0.
INFO 03-02 00:02:59 [logger.py:42] Received request cmpl-fd9697f914be4c0c8e8b9ff419019d7b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:02:59 [async_llm.py:261] Added request cmpl-fd9697f914be4c0c8e8b9ff419019d7b-0.
INFO 03-02 00:03:00 [logger.py:42] Received request cmpl-9e4dfea98aef42098e6ffa2a7f29d720-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:00 [async_llm.py:261] Added request cmpl-9e4dfea98aef42098e6ffa2a7f29d720-0.
INFO 03-02 00:03:01 [logger.py:42] Received request cmpl-cbb515b021e94e43bb51e6d76d7dcb69-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:01 [async_llm.py:261] Added request cmpl-cbb515b021e94e43bb51e6d76d7dcb69-0.
INFO 03-02 00:03:02 [logger.py:42] Received request cmpl-02bea2b7f9664f3ca7dc7677fcbbc4d2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:02 [async_llm.py:261] Added request cmpl-02bea2b7f9664f3ca7dc7677fcbbc4d2-0.
INFO 03-02 00:03:03 [logger.py:42] Received request cmpl-40d079c3984c4ab6b58786211e249fb8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:03 [async_llm.py:261] Added request cmpl-40d079c3984c4ab6b58786211e249fb8-0.
INFO 03-02 00:03:04 [logger.py:42] Received request cmpl-f26d982e2a4240b58032a094cc55ad5d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:04 [async_llm.py:261] Added request cmpl-f26d982e2a4240b58032a094cc55ad5d-0.
INFO 03-02 00:03:05 [logger.py:42] Received request cmpl-0a42f1f23f864b1b9f48eb6fd1d08724-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:05 [async_llm.py:261] Added request cmpl-0a42f1f23f864b1b9f48eb6fd1d08724-0.
INFO 03-02 00:03:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:03:06 [logger.py:42] Received request cmpl-7cdd87cd1f914680b2878d147f00fb77-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:06 [async_llm.py:261] Added request cmpl-7cdd87cd1f914680b2878d147f00fb77-0.
INFO 03-02 00:03:07 [logger.py:42] Received request cmpl-7dc9ec31074042839bfbca971e7c0bf4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:07 [async_llm.py:261] Added request cmpl-7dc9ec31074042839bfbca971e7c0bf4-0.
INFO 03-02 00:03:08 [logger.py:42] Received request cmpl-a6a4b4f18e224368bd16f5750baad069-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:08 [async_llm.py:261] Added request cmpl-a6a4b4f18e224368bd16f5750baad069-0.
INFO 03-02 00:03:09 [logger.py:42] Received request cmpl-8e7d2834ff2041c3b8ad598ab67532b3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:09 [async_llm.py:261] Added request cmpl-8e7d2834ff2041c3b8ad598ab67532b3-0.
INFO 03-02 00:03:10 [logger.py:42] Received request cmpl-68f2ff7215c64ee6afd5dc370604e1fd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:10 [async_llm.py:261] Added request cmpl-68f2ff7215c64ee6afd5dc370604e1fd-0.
INFO 03-02 00:03:12 [logger.py:42] Received request cmpl-a09c4b0375374cb8861e1469a9ff2328-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:12 [async_llm.py:261] Added request cmpl-a09c4b0375374cb8861e1469a9ff2328-0.
INFO 03-02 00:03:13 [logger.py:42] Received request cmpl-97efe737382a4cae84f0b4fd406daff6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:13 [async_llm.py:261] Added request cmpl-97efe737382a4cae84f0b4fd406daff6-0.
INFO 03-02 00:03:14 [logger.py:42] Received request cmpl-117c5c7ad38d491f9a9613e2e6756778-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:14 [async_llm.py:261] Added request cmpl-117c5c7ad38d491f9a9613e2e6756778-0.
INFO 03-02 00:03:15 [logger.py:42] Received request cmpl-468f0c42b0bc49fbb4bf16f792ac66c2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:15 [async_llm.py:261] Added request cmpl-468f0c42b0bc49fbb4bf16f792ac66c2-0.
INFO 03-02 00:03:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:03:16 [logger.py:42] Received request cmpl-f4b608a6338340698a0ad8e389987fab-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:16 [async_llm.py:261] Added request cmpl-f4b608a6338340698a0ad8e389987fab-0.
INFO 03-02 00:03:17 [logger.py:42] Received request cmpl-d0ad145d10d74bad8df98dfbbd21b511-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:17 [async_llm.py:261] Added request cmpl-d0ad145d10d74bad8df98dfbbd21b511-0.
INFO 03-02 00:03:18 [logger.py:42] Received request cmpl-669f2dc2254b4b0ea4e4b19e80383d18-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:18 [async_llm.py:261] Added request cmpl-669f2dc2254b4b0ea4e4b19e80383d18-0.
INFO 03-02 00:03:19 [logger.py:42] Received request cmpl-666c37693cf543fa9e4c2154f1ecd7d7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:19 [async_llm.py:261] Added request cmpl-666c37693cf543fa9e4c2154f1ecd7d7-0.
INFO 03-02 00:03:20 [logger.py:42] Received request cmpl-6cf18c275e204f4c94261d6803baab37-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:20 [async_llm.py:261] Added request cmpl-6cf18c275e204f4c94261d6803baab37-0.
INFO 03-02 00:03:21 [logger.py:42] Received request cmpl-f0f7fad3fd574e11bfb355f62befa625-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:21 [async_llm.py:261] Added request cmpl-f0f7fad3fd574e11bfb355f62befa625-0.
INFO 03-02 00:03:22 [logger.py:42] Received request cmpl-4657e841a4f54f44b7ac981d38eb3211-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:22 [async_llm.py:261] Added request cmpl-4657e841a4f54f44b7ac981d38eb3211-0.
INFO 03-02 00:03:23 [logger.py:42] Received request cmpl-f8639c81e43246e5823d5ad7ea3093bd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:23 [async_llm.py:261] Added request cmpl-f8639c81e43246e5823d5ad7ea3093bd-0.
INFO 03-02 00:03:25 [logger.py:42] Received request cmpl-7695ab630bc346588614afba6042f9d4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:25 [async_llm.py:261] Added request cmpl-7695ab630bc346588614afba6042f9d4-0.
INFO 03-02 00:03:26 [logger.py:42] Received request cmpl-280e7571517a448988cfe5a5950ad643-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:26 [async_llm.py:261] Added request cmpl-280e7571517a448988cfe5a5950ad643-0.
INFO 03-02 00:03:26 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:03:27 [logger.py:42] Received request cmpl-366eef1596544e3cae81c7ca188def8f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:27 [async_llm.py:261] Added request cmpl-366eef1596544e3cae81c7ca188def8f-0.
INFO 03-02 00:03:28 [logger.py:42] Received request cmpl-cd49f75f0969492a87e4cf54b7e5202c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:28 [async_llm.py:261] Added request cmpl-cd49f75f0969492a87e4cf54b7e5202c-0.
INFO 03-02 00:03:29 [logger.py:42] Received request cmpl-dc1a8fd8c2de48679e06b9a875a753f5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:29 [async_llm.py:261] Added request cmpl-dc1a8fd8c2de48679e06b9a875a753f5-0.
INFO 03-02 00:03:30 [logger.py:42] Received request cmpl-2db9b402e0d64bb1a4b1f89356612e21-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:30 [async_llm.py:261] Added request cmpl-2db9b402e0d64bb1a4b1f89356612e21-0.
INFO 03-02 00:03:31 [logger.py:42] Received request cmpl-c5b6aa806a254fd99fb69eb3a4faed2a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:31 [async_llm.py:261] Added request cmpl-c5b6aa806a254fd99fb69eb3a4faed2a-0.
INFO 03-02 00:03:32 [logger.py:42] Received request cmpl-d9fe1509c2ae4f48aa6b36450861c7e1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:32 [async_llm.py:261] Added request cmpl-d9fe1509c2ae4f48aa6b36450861c7e1-0.
INFO 03-02 00:03:33 [logger.py:42] Received request cmpl-2c6924b808d5449494bc24c1ac308f7e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:33 [async_llm.py:261] Added request cmpl-2c6924b808d5449494bc24c1ac308f7e-0.
INFO 03-02 00:03:34 [logger.py:42] Received request cmpl-52654a727bb341a1a91df26af2cac4de-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:34 [async_llm.py:261] Added request cmpl-52654a727bb341a1a91df26af2cac4de-0.
INFO 03-02 00:03:35 [logger.py:42] Received request cmpl-a2bd1b8b162441aabd5bf5de4e9827af-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:35 [async_llm.py:261] Added request cmpl-a2bd1b8b162441aabd5bf5de4e9827af-0.
INFO 03-02 00:03:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:03:36 [logger.py:42] Received request cmpl-fec4d4269b5147dfa535581cc79b2b9b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:36 [async_llm.py:261] Added request cmpl-fec4d4269b5147dfa535581cc79b2b9b-0.
INFO 03-02 00:03:38 [logger.py:42] Received request cmpl-6673de38787d4eacbeb4963b4b462f01-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:38 [async_llm.py:261] Added request cmpl-6673de38787d4eacbeb4963b4b462f01-0.
INFO 03-02 00:03:39 [logger.py:42] Received request cmpl-e2a630ae9365448c918e2673594f007b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:39 [async_llm.py:261] Added request cmpl-e2a630ae9365448c918e2673594f007b-0.
INFO 03-02 00:03:40 [logger.py:42] Received request cmpl-055ec7fbbf754470ae14459dfc058575-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:40 [async_llm.py:261] Added request cmpl-055ec7fbbf754470ae14459dfc058575-0.
INFO 03-02 00:03:41 [logger.py:42] Received request cmpl-0a1cde9b53714be79dc32de07da04668-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:41 [async_llm.py:261] Added request cmpl-0a1cde9b53714be79dc32de07da04668-0.
INFO 03-02 00:03:42 [logger.py:42] Received request cmpl-9d876434c6864819849ff1c108ee6729-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:42 [async_llm.py:261] Added request cmpl-9d876434c6864819849ff1c108ee6729-0.
INFO 03-02 00:03:43 [logger.py:42] Received request cmpl-8972bd6beafe4faebfe5f05f5ee6f77c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:43 [async_llm.py:261] Added request cmpl-8972bd6beafe4faebfe5f05f5ee6f77c-0.
INFO 03-02 00:03:44 [logger.py:42] Received request cmpl-c843eaa706bc49908e59838bf0e810b0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:44 [async_llm.py:261] Added request cmpl-c843eaa706bc49908e59838bf0e810b0-0.
INFO 03-02 00:03:45 [logger.py:42] Received request cmpl-d7abf1b6cd3f4ca9acd7f527667f7979-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:45 [async_llm.py:261] Added request cmpl-d7abf1b6cd3f4ca9acd7f527667f7979-0.
INFO 03-02 00:03:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:03:46 [logger.py:42] Received request cmpl-5d3e328a3ef3420b81c488d262e4841a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:46 [async_llm.py:261] Added request cmpl-5d3e328a3ef3420b81c488d262e4841a-0.
INFO 03-02 00:03:47 [logger.py:42] Received request cmpl-b132e4469ccb4850865951358ea767c3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:47 [async_llm.py:261] Added request cmpl-b132e4469ccb4850865951358ea767c3-0.
INFO 03-02 00:03:48 [logger.py:42] Received request cmpl-e7444110cf6040c6a1a084e3272c231a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:48 [async_llm.py:261] Added request cmpl-e7444110cf6040c6a1a084e3272c231a-0.
INFO 03-02 00:03:49 [logger.py:42] Received request cmpl-fe5b2774ffa94c6f91629d5200ba66a2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:49 [async_llm.py:261] Added request cmpl-fe5b2774ffa94c6f91629d5200ba66a2-0.
INFO 03-02 00:03:51 [logger.py:42] Received request cmpl-7964e18e1bef45ef8dbf01e39ab8b023-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:51 [async_llm.py:261] Added request cmpl-7964e18e1bef45ef8dbf01e39ab8b023-0.
INFO 03-02 00:03:52 [logger.py:42] Received request cmpl-76f52c40a8e941f8b58b4ff3606587d8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:52 [async_llm.py:261] Added request cmpl-76f52c40a8e941f8b58b4ff3606587d8-0.
INFO 03-02 00:03:53 [logger.py:42] Received request cmpl-6ae90c323fbc4244a2a5a17f366ef0d5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:53 [async_llm.py:261] Added request cmpl-6ae90c323fbc4244a2a5a17f366ef0d5-0.
INFO 03-02 00:03:54 [logger.py:42] Received request cmpl-abeb89aeb5fd46359100ed72151a16ef-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:54 [async_llm.py:261] Added request cmpl-abeb89aeb5fd46359100ed72151a16ef-0.
INFO 03-02 00:03:55 [logger.py:42] Received request cmpl-65f51ca110e0466ca9ba79bb5595a2cc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:55 [async_llm.py:261] Added request cmpl-65f51ca110e0466ca9ba79bb5595a2cc-0.
INFO 03-02 00:03:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:03:56 [logger.py:42] Received request cmpl-93a72c9be95e4baca83c2c7182bc6986-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:56 [async_llm.py:261] Added request cmpl-93a72c9be95e4baca83c2c7182bc6986-0.
INFO 03-02 00:03:57 [logger.py:42] Received request cmpl-68b4385379b5469eb31b0c4e43bf3399-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:57 [async_llm.py:261] Added request cmpl-68b4385379b5469eb31b0c4e43bf3399-0.
INFO 03-02 00:03:58 [logger.py:42] Received request cmpl-dbcca2d248f74d41b9780ba76a7e00e6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:58 [async_llm.py:261] Added request cmpl-dbcca2d248f74d41b9780ba76a7e00e6-0.
INFO 03-02 00:03:59 [logger.py:42] Received request cmpl-65bc4873009449779b8c182af5d7f00b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:59 [async_llm.py:261] Added request cmpl-65bc4873009449779b8c182af5d7f00b-0.
INFO 03-02 00:04:00 [logger.py:42] Received request cmpl-461a064104554111aca202353a443887-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:04:00 [async_llm.py:261] Added request cmpl-461a064104554111aca202353a443887-0.
INFO 03-02 00:04:01 [logger.py:42] Received request cmpl-3d972cb578134d86a87ad8264ea4ae14-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:04:01 [async_llm.py:261] Added request cmpl-3d972cb578134d86a87ad8264ea4ae14-0.
INFO 03-02 00:04:02 [logger.py:42] Received request cmpl-a5a6cc005fac436d901d0fff31226aee-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:04:02 [async_llm.py:261] Added request cmpl-a5a6cc005fac436d901d0fff31226aee-0.
INFO 03-02 00:04:04 [logger.py:42] Received request cmpl-20ffb4525ec241579650a04b65f106c0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:04:04 [async_llm.py:261] Added request cmpl-20ffb4525ec241579650a04b65f106c0-0.
INFO 03-02 00:04:05 [logger.py:42] Received request cmpl-e5da5715d6ef467ea44dc46eb38a67eb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:04:05 [async_llm.py:261] Added request cmpl-e5da5715d6ef467ea44dc46eb38a67eb-0.
INFO 03-02 00:04:06 [logger.py:42] Received request cmpl-08b7c277d0d64b76975b6baab77367e8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:04:06 [async_llm.py:261] Added request cmpl-08b7c277d0d64b76975b6baab77367e8-0.
INFO 03-02 00:04:06 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.8 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 1.3%, Prefix cache hit rate: 0.0%
INFO 03-02 00:04:07 [logger.py:42] Received request cmpl-87c46a2c7e5e46eaa180035e2337899a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:04:07 [async_llm.py:261] Added request cmpl-87c46a2c7e5e46eaa180035e2337899a-0.
INFO 03-02 00:04:08 [logger.py:42] Received request cmpl-28c031d72f1f42e79d39d3fd8cd25852-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:04:08 [async_llm.py:261] Added request cmpl-28c031d72f1f42e79d39d3fd8cd25852-0.
INFO 03-02 00:04:09 [logger.py:42] Received request cmpl-db89ef2227be47f1a06e93294c47547d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:04:09 [async_llm.py:261] Added request cmpl-db89ef2227be47f1a06e93294c47547d-0.
INFO 03-02 00:04:10 [logger.py:42] Received request cmpl-5773cb205ab948aa91b3161989799533-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:04:10 [async_llm.py:261] Added request cmpl-5773cb205ab948aa91b3161989799533-0.
INFO 03-02 00:04:11 [logger.py:42] Received request cmpl-110333663aa44a8fa2c14bb956c43a9d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:04:11 [async_llm.py:261] Added request cmpl-110333663aa44a8fa2c14bb956c43a9d-0.
INFO 03-02 00:04:12 [logger.py:42] Received request cmpl-55769b862a3041b2b82ce613e098ced6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:04:12 [async_llm.py:261] Added request cmpl-55769b862a3041b2b82ce613e098ced6-0.
INFO 03-02 00:04:13 [logger.py:42] Received request cmpl-9d9cd9c4528a41a0a60c9cd56b02523a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:04:13 [async_llm.py:261] Added request cmpl-9d9cd9c4528a41a0a60c9cd56b02523a-0.
INFO 03-02 00:04:14 [logger.py:42] Received request cmpl-22477170beab4e2983937a10e7fea0f0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:04:14 [async_llm.py:261] Added request cmpl-22477170beab4e2983937a10e7fea0f0-0.
INFO 03-02 00:04:15 [logger.py:42] Received request cmpl-4a3731bdd147442fa00464e5e502493d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:04:15 [async_llm.py:261] Added request cmpl-4a3731bdd147442fa00464e5e502493d-0.
INFO 03-02 00:04:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.7 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:04:17 [logger.py:42] Received request cmpl-02a2d83129be442d833c3284c1f2ab99-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:04:17 [async_llm.py:261] Added request cmpl-02a2d83129be442d833c3284c1f2ab99-0.
INFO 03-02 00:04:18 [logger.py:42] Received request cmpl-1561a9e7424542dca9514c104156e73e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:04:18 [async_llm.py:261] Added request cmpl-1561a9e7424542dca9514c104156e73e-0.
INFO 03-02 00:04:19 [logger.py:42] Received request cmpl-976dea41b90c411b8f45f86a75c03996-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:04:19 [async_llm.py:261] Added request cmpl-976dea41b90c411b8f45f86a75c03996-0.
INFO 03-02 00:04:20 [logger.py:42] Received request cmpl-7941b7f633884760ac1e662654867a17-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:04:20 [async_llm.py:261] Added request cmpl-7941b7f633884760ac1e662654867a17-0.
INFO 03-02 00:04:21 [logger.py:42] Received request cmpl-ac5e47100c354fd39b1d72db4de26db0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:04:21 [async_llm.py:261] Added request cmpl-ac5e47100c354fd39b1d72db4de26db0-0.
INFO 03-02 00:04:22 [logger.py:42] Received request cmpl-ddc708806547424782d7c20e3bbd3da8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:04:22 [async_llm.py:261] Added request cmpl-ddc708806547424782d7c20e3bbd3da8-0.
INFO 03-02 00:04:23 [logger.py:42] Received request cmpl-27bc655368c9496892dd373a824a19a2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:04:23 [async_llm.py:261] Added request cmpl-27bc655368c9496892dd373a824a19a2-0.
INFO 03-02 00:04:24 [logger.py:42] Received request cmpl-e7a58f03f7864a92a80f3dfa9eb3068e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:04:24 [async_llm.py:261] Added request cmpl-e7a58f03f7864a92a80f3dfa9eb3068e-0.
INFO 03-02 00:04:25 [logger.py:42] Received request cmpl-b34d7dc169614faa90c4c3f15e6976f7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:04:25 [async_llm.py:261] Added request cmpl-b34d7dc169614faa90c4c3f15e6976f7-0.
INFO 03-02 00:04:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:04:26 [logger.py:42] Received request cmpl-037016ad532e48ef8518b9364b24fa4f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:04:26 [async_llm.py:261] Added request cmpl-037016ad532e48ef8518b9364b24fa4f-0.
INFO 03-02 00:04:27 [logger.py:42] Received request cmpl-17f1169d78964c048499ea517fa34bf7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:04:27 [async_llm.py:261] Added request cmpl-17f1169d78964c048499ea517fa34bf7-0.
INFO 03-02 00:04:28 [logger.py:42] Received request cmpl-4f2cd372329249adb81dcef7f76a1e9d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:04:28 [async_llm.py:261] Added request cmpl-4f2cd372329249adb81dcef7f76a1e9d-0.
INFO 03-02 00:04:30 [logger.py:42] Received request cmpl-8ea8ca7cdd92409d99529b9ca2da7b92-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:04:30 [async_llm.py:261] Added request cmpl-8ea8ca7cdd92409d99529b9ca2da7b92-0.
INFO 03-02 00:04:31 [logger.py:42] Received request cmpl-71a32f0d60c740eabbb3de0fe583f0e6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:04:31 [async_llm.py:261] Added request cmpl-71a32f0d60c740eabbb3de0fe583f0e6-0.
INFO 03-02 00:04:32 [logger.py:42] Received request cmpl-434a3c0808dc4f298767e73649e08c95-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:04:32 [async_llm.py:261] Added request cmpl-434a3c0808dc4f298767e73649e08c95-0.
INFO 03-02 00:04:33 [logger.py:42] Received request cmpl-0f2fbc0476994c62af3db5ab13f31168-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:04:33 [async_llm.py:261] Added request cmpl-0f2fbc0476994c62af3db5ab13f31168-0.
INFO 03-02 00:04:34 [logger.py:42] Received request cmpl-98af6798316a4612bf689dfc13a7635d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:04:34 [async_llm.py:261] Added request cmpl-98af6798316a4612bf689dfc13a7635d-0.
INFO 03-02 00:04:35 [logger.py:42] Received request cmpl-4a481a4f960b471ca2e194d0a3957979-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:04:35 [async_llm.py:261] Added request cmpl-4a481a4f960b471ca2e194d0a3957979-0.
INFO 03-02 00:04:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:04:36 [logger.py:42] Received request cmpl-881500807ad14bea98b7ee5d1e18eb7c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:04:36 [async_llm.py:261] Added request cmpl-881500807ad14bea98b7ee5d1e18eb7c-0.
INFO 03-02 00:04:37 [logger.py:42] Received request cmpl-1dd220546b05481894031eb2b522e7c3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:04:37 [async_llm.py:261] Added request cmpl-1dd220546b05481894031eb2b522e7c3-0.
INFO 03-02 00:04:38 [logger.py:42] Received request cmpl-41c7aaa0178e41d3837040fa023bfe14-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:04:38 [async_llm.py:261] Added request cmpl-41c7aaa0178e41d3837040fa023bfe14-0.
INFO 03-02 00:04:39 [logger.py:42] Received request cmpl-a7a9eab8d4e046cebc3a5087431d0319-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:04:39 [async_llm.py:261] Added request cmpl-a7a9eab8d4e046cebc3a5087431d0319-0.
INFO 03-02 00:04:40 [logger.py:42] Received request cmpl-2c2e064f2c7246329506927db69d4b42-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:04:40 [async_llm.py:261] Added request cmpl-2c2e064f2c7246329506927db69d4b42-0.
INFO 03-02 00:04:41 [logger.py:42] Received request cmpl-3d0f003e4816466e9b1f0ad305a6c296-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:04:41 [async_llm.py:261] Added request cmpl-3d0f003e4816466e9b1f0ad305a6c296-0.
INFO 03-02 00:04:43 [logger.py:42] Received request cmpl-dd2dfff2e66c48c59cb8724adf46d9de-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:04:43 [async_llm.py:261] Added request cmpl-dd2dfff2e66c48c59cb8724adf46d9de-0.
INFO 03-02 00:04:44 [logger.py:42] Received request cmpl-b024fba149ff4b52bdcc3e7ae531de05-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:04:44 [async_llm.py:261] Added request cmpl-b024fba149ff4b52bdcc3e7ae531de05-0.
INFO 03-02 00:04:45 [logger.py:42] Received request cmpl-d8f687f5529448b49333cb079b07d5b8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:04:45 [async_llm.py:261] Added request cmpl-d8f687f5529448b49333cb079b07d5b8-0.
INFO 03-02 00:04:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:04:46 [logger.py:42] Received request cmpl-5f360a45af6a463387f66368bc1da6a0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:04:46 [async_llm.py:261] Added request cmpl-5f360a45af6a463387f66368bc1da6a0-0.
INFO 03-02 00:04:47 [logger.py:42] Received request cmpl-6eef667d56fa4719827bd1de652f58a3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:04:47 [async_llm.py:261] Added request cmpl-6eef667d56fa4719827bd1de652f58a3-0.
INFO 03-02 00:04:48 [logger.py:42] Received request cmpl-97966dd189384045a703eb2782d50d37-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:04:48 [async_llm.py:261] Added request cmpl-97966dd189384045a703eb2782d50d37-0.
INFO 03-02 00:04:49 [logger.py:42] Received request cmpl-b4c70935034142b4a27dc4c2de506d81-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:04:49 [async_llm.py:261] Added request cmpl-b4c70935034142b4a27dc4c2de506d81-0.
INFO 03-02 00:04:50 [logger.py:42] Received request cmpl-96ea110d2a2a4f2f932b0359ab061fd9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:04:50 [async_llm.py:261] Added request cmpl-96ea110d2a2a4f2f932b0359ab061fd9-0.
INFO 03-02 00:04:51 [logger.py:42] Received request cmpl-8fccd46e93fe4c5aa096ae248e7c12c6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:04:51 [async_llm.py:261] Added request cmpl-8fccd46e93fe4c5aa096ae248e7c12c6-0.
INFO 03-02 00:04:52 [logger.py:42] Received request cmpl-b1efb4fff05c41c5a3065f9f997e67ad-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:04:52 [async_llm.py:261] Added request cmpl-b1efb4fff05c41c5a3065f9f997e67ad-0.
INFO 03-02 00:04:53 [logger.py:42] Received request cmpl-d402516fda11451298293c06d8d53261-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:04:53 [async_llm.py:261] Added request cmpl-d402516fda11451298293c06d8d53261-0.
INFO 03-02 00:04:54 [logger.py:42] Received request cmpl-1bc57d4f1f824e22b0993c84379e6d6b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:04:54 [async_llm.py:261] Added request cmpl-1bc57d4f1f824e22b0993c84379e6d6b-0.
INFO 03-02 00:04:56 [logger.py:42] Received request cmpl-a5a637804f404cb09ab5444031ad7f39-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:04:56 [async_llm.py:261] Added request cmpl-a5a637804f404cb09ab5444031ad7f39-0.
INFO 03-02 00:04:56 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:04:57 [logger.py:42] Received request cmpl-7efc710a38e148749cf2b018318f529b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:04:57 [async_llm.py:261] Added request cmpl-7efc710a38e148749cf2b018318f529b-0.
INFO 03-02 00:04:58 [logger.py:42] Received request cmpl-f25e0bce48d649b4b2f2e4e37cbcd9da-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:04:58 [async_llm.py:261] Added request cmpl-f25e0bce48d649b4b2f2e4e37cbcd9da-0.
INFO 03-02 00:04:59 [logger.py:42] Received request cmpl-ac15cf9925f34106a70ef96d709c9457-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:04:59 [async_llm.py:261] Added request cmpl-ac15cf9925f34106a70ef96d709c9457-0.
INFO 03-02 00:05:00 [logger.py:42] Received request cmpl-321481a943774ac3aff634706dda1a78-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:00 [async_llm.py:261] Added request cmpl-321481a943774ac3aff634706dda1a78-0.
INFO 03-02 00:05:01 [logger.py:42] Received request cmpl-1a1ce2664d6c4ad584adf133a3441cd8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:01 [async_llm.py:261] Added request cmpl-1a1ce2664d6c4ad584adf133a3441cd8-0.
INFO 03-02 00:05:02 [logger.py:42] Received request cmpl-d81feacdfd6144fb9d12d6326f298866-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:02 [async_llm.py:261] Added request cmpl-d81feacdfd6144fb9d12d6326f298866-0.
INFO 03-02 00:05:03 [logger.py:42] Received request cmpl-a839a129de0b48f89fc6aab3c67e74b5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:03 [async_llm.py:261] Added request cmpl-a839a129de0b48f89fc6aab3c67e74b5-0.
INFO 03-02 00:05:04 [logger.py:42] Received request cmpl-97fbe2b823ae4f4bbb1d35c4512e4692-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:04 [async_llm.py:261] Added request cmpl-97fbe2b823ae4f4bbb1d35c4512e4692-0.
INFO 03-02 00:05:05 [logger.py:42] Received request cmpl-c4b7deadcfa04966bbb1fde44d760bf1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:05 [async_llm.py:261] Added request cmpl-c4b7deadcfa04966bbb1fde44d760bf1-0.
INFO 03-02 00:05:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:05:06 [logger.py:42] Received request cmpl-37106e36c92244728a478386fb59edc5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:06 [async_llm.py:261] Added request cmpl-37106e36c92244728a478386fb59edc5-0.
INFO 03-02 00:05:07 [logger.py:42] Received request cmpl-121531581ebc4f7ca3b4526163ab5305-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:07 [async_llm.py:261] Added request cmpl-121531581ebc4f7ca3b4526163ab5305-0.
INFO 03-02 00:05:09 [logger.py:42] Received request cmpl-ea852d571607406eaaefe7c0bdaed53a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:09 [async_llm.py:261] Added request cmpl-ea852d571607406eaaefe7c0bdaed53a-0.
INFO 03-02 00:05:10 [logger.py:42] Received request cmpl-d4c7e0b444c04c9c8281791aab5c012c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:10 [async_llm.py:261] Added request cmpl-d4c7e0b444c04c9c8281791aab5c012c-0.
INFO 03-02 00:05:11 [logger.py:42] Received request cmpl-65f874386a43485888b422459c2f722a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:11 [async_llm.py:261] Added request cmpl-65f874386a43485888b422459c2f722a-0.
INFO 03-02 00:05:12 [logger.py:42] Received request cmpl-6b78753e81eb41aa8b6b02a657665be9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:12 [async_llm.py:261] Added request cmpl-6b78753e81eb41aa8b6b02a657665be9-0.
INFO 03-02 00:05:13 [logger.py:42] Received request cmpl-287a384d16d643bfb44a088dc17b7332-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:13 [async_llm.py:261] Added request cmpl-287a384d16d643bfb44a088dc17b7332-0.
INFO 03-02 00:05:14 [logger.py:42] Received request cmpl-571d3824825f4567aa2e59b4eda6aed0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:14 [async_llm.py:261] Added request cmpl-571d3824825f4567aa2e59b4eda6aed0-0.
INFO 03-02 00:05:15 [logger.py:42] Received request cmpl-bcc9834deec243b09e8ac84b98dfec7c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:15 [async_llm.py:261] Added request cmpl-bcc9834deec243b09e8ac84b98dfec7c-0.
INFO 03-02 00:05:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:05:16 [logger.py:42] Received request cmpl-b7cb83d474264df2b36871861db5726e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:16 [async_llm.py:261] Added request cmpl-b7cb83d474264df2b36871861db5726e-0.
INFO 03-02 00:05:17 [logger.py:42] Received request cmpl-5d8324d254534947a30052902d56f0ea-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:17 [async_llm.py:261] Added request cmpl-5d8324d254534947a30052902d56f0ea-0.
INFO 03-02 00:05:18 [logger.py:42] Received request cmpl-9f8e88472cc7457c8df489f3528f6a13-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:18 [async_llm.py:261] Added request cmpl-9f8e88472cc7457c8df489f3528f6a13-0.
INFO 03-02 00:05:19 [logger.py:42] Received request cmpl-7cae7d756f4f48f883fa5a1eb46aabe9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:19 [async_llm.py:261] Added request cmpl-7cae7d756f4f48f883fa5a1eb46aabe9-0.
INFO 03-02 00:05:20 [logger.py:42] Received request cmpl-8fa8257df4194fa187af561185ba5fd8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:20 [async_llm.py:261] Added request cmpl-8fa8257df4194fa187af561185ba5fd8-0.
INFO 03-02 00:05:22 [logger.py:42] Received request cmpl-d79996f42fb64cab8716314a77aad8bd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:22 [async_llm.py:261] Added request cmpl-d79996f42fb64cab8716314a77aad8bd-0.
INFO 03-02 00:05:23 [logger.py:42] Received request cmpl-1dd7c461fbe34a74a00ab0fe36cb3f91-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:23 [async_llm.py:261] Added request cmpl-1dd7c461fbe34a74a00ab0fe36cb3f91-0.
INFO 03-02 00:05:24 [logger.py:42] Received request cmpl-68618ce99e6344d69fa4cfa82cf21b16-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:24 [async_llm.py:261] Added request cmpl-68618ce99e6344d69fa4cfa82cf21b16-0.
INFO 03-02 00:05:25 [logger.py:42] Received request cmpl-862533303c0349ba8ae3749524113b51-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:25 [async_llm.py:261] Added request cmpl-862533303c0349ba8ae3749524113b51-0.
INFO 03-02 00:05:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:05:26 [logger.py:42] Received request cmpl-e565ae1104a24fabb16fdb7d02317e7f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:26 [async_llm.py:261] Added request cmpl-e565ae1104a24fabb16fdb7d02317e7f-0.
INFO 03-02 00:05:27 [logger.py:42] Received request cmpl-208990a4c9b7416da3b8d78468dc45cb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:27 [async_llm.py:261] Added request cmpl-208990a4c9b7416da3b8d78468dc45cb-0.
INFO 03-02 00:05:28 [logger.py:42] Received request cmpl-ae3ffd328bf04008a25839a839caf4a3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:28 [async_llm.py:261] Added request cmpl-ae3ffd328bf04008a25839a839caf4a3-0.
INFO 03-02 00:05:29 [logger.py:42] Received request cmpl-009f250f6d9e4979bf072185025ab8f3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:29 [async_llm.py:261] Added request cmpl-009f250f6d9e4979bf072185025ab8f3-0.
INFO 03-02 00:05:30 [logger.py:42] Received request cmpl-f0d41fdabc3344fb84f0cd6e198cc148-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:30 [async_llm.py:261] Added request cmpl-f0d41fdabc3344fb84f0cd6e198cc148-0.
INFO 03-02 00:05:31 [logger.py:42] Received request cmpl-b3d07239fbd047f18bc8453065925121-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:31 [async_llm.py:261] Added request cmpl-b3d07239fbd047f18bc8453065925121-0.
INFO 03-02 00:05:32 [logger.py:42] Received request cmpl-33500f455f2643b396b8e6f0cbd328db-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:32 [async_llm.py:261] Added request cmpl-33500f455f2643b396b8e6f0cbd328db-0.
INFO 03-02 00:05:33 [logger.py:42] Received request cmpl-c5f504f8ceee4c0a973b1c72a0a30c1e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:33 [async_llm.py:261] Added request cmpl-c5f504f8ceee4c0a973b1c72a0a30c1e-0.
INFO 03-02 00:05:35 [logger.py:42] Received request cmpl-a26867d863a34bf99f6235e8cbf12f8c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:35 [async_llm.py:261] Added request cmpl-a26867d863a34bf99f6235e8cbf12f8c-0.
INFO 03-02 00:05:36 [logger.py:42] Received request cmpl-6974c25ae39945bf84879f6e1ea471e7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:36 [async_llm.py:261] Added request cmpl-6974c25ae39945bf84879f6e1ea471e7-0.
INFO 03-02 00:05:36 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:05:37 [logger.py:42] Received request cmpl-9eca3c9b37a84d09ae1e291fcf4f1b47-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:37 [async_llm.py:261] Added request cmpl-9eca3c9b37a84d09ae1e291fcf4f1b47-0.
INFO 03-02 00:05:38 [logger.py:42] Received request cmpl-284622b4126d4f3e90efd03e7f797060-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:38 [async_llm.py:261] Added request cmpl-284622b4126d4f3e90efd03e7f797060-0.
INFO 03-02 00:05:39 [logger.py:42] Received request cmpl-406599d08ddf4ec7a8d0ddb6cc6fe1bb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:39 [async_llm.py:261] Added request cmpl-406599d08ddf4ec7a8d0ddb6cc6fe1bb-0.
INFO 03-02 00:05:40 [logger.py:42] Received request cmpl-f8deb8bed6bf4fd3aee4a850eeb4d73e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:40 [async_llm.py:261] Added request cmpl-f8deb8bed6bf4fd3aee4a850eeb4d73e-0.
INFO 03-02 00:05:41 [logger.py:42] Received request cmpl-1eb6a324d5a54333aa802c79ee3f7a87-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:41 [async_llm.py:261] Added request cmpl-1eb6a324d5a54333aa802c79ee3f7a87-0.
INFO 03-02 00:05:42 [logger.py:42] Received request cmpl-faac343565824adea6878de1602db37b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:42 [async_llm.py:261] Added request cmpl-faac343565824adea6878de1602db37b-0.
INFO 03-02 00:05:43 [logger.py:42] Received request cmpl-7a3e60818050464db8e7c494d579acb8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:43 [async_llm.py:261] Added request cmpl-7a3e60818050464db8e7c494d579acb8-0.
INFO 03-02 00:05:44 [logger.py:42] Received request cmpl-2b3ab6fbef784f558a9f488b670642eb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:44 [async_llm.py:261] Added request cmpl-2b3ab6fbef784f558a9f488b670642eb-0.
INFO 03-02 00:05:45 [logger.py:42] Received request cmpl-6b05d2dffae6453d952ece7e91decf5b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:45 [async_llm.py:261] Added request cmpl-6b05d2dffae6453d952ece7e91decf5b-0.
INFO 03-02 00:05:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:05:46 [logger.py:42] Received request cmpl-d3e0102545494c8ba28f6eeee3c86a59-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:46 [async_llm.py:261] Added request cmpl-d3e0102545494c8ba28f6eeee3c86a59-0.
INFO 03-02 00:05:48 [logger.py:42] Received request cmpl-d653b85489af42bfbacb77e27c066bd7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:48 [async_llm.py:261] Added request cmpl-d653b85489af42bfbacb77e27c066bd7-0.
INFO 03-02 00:05:49 [logger.py:42] Received request cmpl-bc58850d99de4fe088e9557936aa65d1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:49 [async_llm.py:261] Added request cmpl-bc58850d99de4fe088e9557936aa65d1-0.
INFO 03-02 00:05:50 [logger.py:42] Received request cmpl-2cbc3264e7d34927820dd8ea0d9aaccc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:50 [async_llm.py:261] Added request cmpl-2cbc3264e7d34927820dd8ea0d9aaccc-0.
INFO 03-02 00:05:51 [logger.py:42] Received request cmpl-238e22e353df40eba424022015df740a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:51 [async_llm.py:261] Added request cmpl-238e22e353df40eba424022015df740a-0.
INFO 03-02 00:05:52 [logger.py:42] Received request cmpl-0ab81c3ca7eb4ba3b86f5ede9cdd2e6a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:52 [async_llm.py:261] Added request cmpl-0ab81c3ca7eb4ba3b86f5ede9cdd2e6a-0.
INFO 03-02 00:05:53 [logger.py:42] Received request cmpl-c0c30b207d604a8888ae3ac3b3f5300a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:53 [async_llm.py:261] Added request cmpl-c0c30b207d604a8888ae3ac3b3f5300a-0.
INFO 03-02 00:05:54 [logger.py:42] Received request cmpl-bf41bf8fb75245dcbc5f1609b0f66c08-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:54 [async_llm.py:261] Added request cmpl-bf41bf8fb75245dcbc5f1609b0f66c08-0.
INFO 03-02 00:05:55 [logger.py:42] Received request cmpl-4f32b08a1469438dbc05987afe39e001-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:55 [async_llm.py:261] Added request cmpl-4f32b08a1469438dbc05987afe39e001-0.
INFO 03-02 00:05:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:05:56 [logger.py:42] Received request cmpl-0fdccfd285c745f187e5339dcf30e8e6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:56 [async_llm.py:261] Added request cmpl-0fdccfd285c745f187e5339dcf30e8e6-0.
INFO 03-02 00:05:57 [logger.py:42] Received request cmpl-217fdf48b6c34f4297956acff2faeac0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:57 [async_llm.py:261] Added request cmpl-217fdf48b6c34f4297956acff2faeac0-0.
INFO 03-02 00:05:58 [logger.py:42] Received request cmpl-43f7edf469894e48a3f0368b31ba97d9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:58 [async_llm.py:261] Added request cmpl-43f7edf469894e48a3f0368b31ba97d9-0.
INFO 03-02 00:05:59 [logger.py:42] Received request cmpl-b4ed7fbd81b0444a89e096c57438e351-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:59 [async_llm.py:261] Added request cmpl-b4ed7fbd81b0444a89e096c57438e351-0.
INFO 03-02 00:06:01 [logger.py:42] Received request cmpl-81eb0b27bb8a482496d2f7fdc102bcee-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:06:01 [async_llm.py:261] Added request cmpl-81eb0b27bb8a482496d2f7fdc102bcee-0.
INFO 03-02 00:06:02 [logger.py:42] Received request cmpl-1269913481d04da8bb49f06ee24936f9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:06:02 [async_llm.py:261] Added request cmpl-1269913481d04da8bb49f06ee24936f9-0.
INFO 03-02 00:06:03 [logger.py:42] Received request cmpl-26a805e9a31349709f1f9c6e5c07771d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:06:03 [async_llm.py:261] Added request cmpl-26a805e9a31349709f1f9c6e5c07771d-0.
INFO 03-02 00:06:04 [logger.py:42] Received request cmpl-0725de0da3c84d41b555747a8631c7f9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:06:04 [async_llm.py:261] Added request cmpl-0725de0da3c84d41b555747a8631c7f9-0.
INFO 03-02 00:06:05 [logger.py:42] Received request cmpl-c1e7ee0b89814a0394d3e1392f9af4f6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:06:05 [async_llm.py:261] Added request cmpl-c1e7ee0b89814a0394d3e1392f9af4f6-0.
INFO 03-02 00:06:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:06:06 [logger.py:42] Received request cmpl-3e5701fd6c784a878f726c4d251e167e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:06:06 [async_llm.py:261] Added request cmpl-3e5701fd6c784a878f726c4d251e167e-0.
INFO 03-02 00:06:07 [logger.py:42] Received request cmpl-f12e8fef904e48548911192ffa05096b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:06:07 [async_llm.py:261] Added request cmpl-f12e8fef904e48548911192ffa05096b-0.
INFO 03-02 00:06:08 [logger.py:42] Received request cmpl-eaf20087cf0d4f97b700287a828c1ecb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:06:08 [async_llm.py:261] Added request cmpl-eaf20087cf0d4f97b700287a828c1ecb-0.
INFO 03-02 00:06:09 [logger.py:42] Received request cmpl-d7e4d63cc13343b8ab9a20b8432aec93-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:06:09 [async_llm.py:261] Added request cmpl-d7e4d63cc13343b8ab9a20b8432aec93-0.
INFO 03-02 00:06:10 [logger.py:42] Received request cmpl-cd55d47b6dae442bbdf2e75c84404339-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:06:10 [async_llm.py:261] Added request cmpl-cd55d47b6dae442bbdf2e75c84404339-0.
INFO 03-02 00:06:11 [logger.py:42] Received request cmpl-bfebe1e0675a4a3b9d4eb6a14e11cf00-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:06:11 [async_llm.py:261] Added request cmpl-bfebe1e0675a4a3b9d4eb6a14e11cf00-0.
INFO 03-02 00:06:12 [logger.py:42] Received request cmpl-b4a7e322c6cf4ce1812505451ef2629a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:06:12 [async_llm.py:261] Added request cmpl-b4a7e322c6cf4ce1812505451ef2629a-0.
INFO 03-02 00:06:14 [logger.py:42] Received request cmpl-503d2f3c56484dac81cc2ff9d3c6810a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:06:14 [async_llm.py:261] Added request cmpl-503d2f3c56484dac81cc2ff9d3c6810a-0.
INFO 03-02 00:06:15 [logger.py:42] Received request cmpl-9cc94078dc554a15a8959302139118fd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:06:15 [async_llm.py:261] Added request cmpl-9cc94078dc554a15a8959302139118fd-0.
INFO 03-02 00:06:16 [logger.py:42] Received request cmpl-6d257706477047e9b191fab6802b1807-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:06:16 [async_llm.py:261] Added request cmpl-6d257706477047e9b191fab6802b1807-0.
INFO 03-02 00:06:16 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.7 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 1.3%, Prefix cache hit rate: 0.0%
INFO 03-02 00:06:17 [logger.py:42] Received request cmpl-7dad6fdee89546bcab0e320f00a300e6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:06:17 [async_llm.py:261] Added request cmpl-7dad6fdee89546bcab0e320f00a300e6-0.
INFO 03-02 00:06:18 [logger.py:42] Received request cmpl-ec3ecaedf1e14e9781e2cd39fa2f01ba-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:06:18 [async_llm.py:261] Added request cmpl-ec3ecaedf1e14e9781e2cd39fa2f01ba-0.
INFO 03-02 00:06:19 [logger.py:42] Received request cmpl-b16da05ccd65498c93ada51db99f25dc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:06:19 [async_llm.py:261] Added request cmpl-b16da05ccd65498c93ada51db99f25dc-0.
INFO 03-02 00:06:20 [logger.py:42] Received request cmpl-119fdccb8b6a493b9c107fa96a83ff44-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:06:20 [async_llm.py:261] Added request cmpl-119fdccb8b6a493b9c107fa96a83ff44-0.
INFO 03-02 00:06:21 [logger.py:42] Received request cmpl-957f18b681bf42058fd13d1eb01a522a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:06:21 [async_llm.py:261] Added request cmpl-957f18b681bf42058fd13d1eb01a522a-0.
INFO 03-02 00:06:22 [logger.py:42] Received request cmpl-668767b6a60042ee9dfa0a4d7a54a37a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:06:22 [async_llm.py:261] Added request cmpl-668767b6a60042ee9dfa0a4d7a54a37a-0.
INFO 03-02 00:06:23 [logger.py:42] Received request cmpl-0df8b529b5914748b5580f7ce43f6025-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:06:23 [async_llm.py:261] Added request cmpl-0df8b529b5914748b5580f7ce43f6025-0.
INFO 03-02 00:06:24 [logger.py:42] Received request cmpl-018acd5a26f34e32b6a34779fba2d56c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:06:24 [async_llm.py:261] Added request cmpl-018acd5a26f34e32b6a34779fba2d56c-0.
INFO 03-02 00:06:25 [logger.py:42] Received request cmpl-9c2bdb64950a4415830892923b1d3785-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:06:25 [async_llm.py:261] Added request cmpl-9c2bdb64950a4415830892923b1d3785-0.
INFO 03-02 00:06:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.8 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:06:27 [logger.py:42] Received request cmpl-b5a7db1160bb46fbb58e122acc3c0ede-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:06:27 [async_llm.py:261] Added request cmpl-b5a7db1160bb46fbb58e122acc3c0ede-0.
INFO 03-02 00:06:28 [logger.py:42] Received request cmpl-d16f57f8f4bc4acfb6049112eace54b1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:06:28 [async_llm.py:261] Added request cmpl-d16f57f8f4bc4acfb6049112eace54b1-0.
INFO 03-02 00:06:29 [logger.py:42] Received request cmpl-fcf5f96178c14e719b8460b12cb943c9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:06:29 [async_llm.py:261] Added request cmpl-fcf5f96178c14e719b8460b12cb943c9-0.
INFO 03-02 00:06:30 [logger.py:42] Received request cmpl-6dd20da590cf46eab59144e410af22fc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:06:30 [async_llm.py:261] Added request cmpl-6dd20da590cf46eab59144e410af22fc-0.
INFO 03-02 00:06:31 [logger.py:42] Received request cmpl-2e6605868cba4adb907004dbb5d648e4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:06:31 [async_llm.py:261] Added request cmpl-2e6605868cba4adb907004dbb5d648e4-0.
INFO 03-02 00:06:32 [logger.py:42] Received request cmpl-d6314a6f3cb141c78d0fdb525b78e8e1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:06:32 [async_llm.py:261] Added request cmpl-d6314a6f3cb141c78d0fdb525b78e8e1-0.
INFO 03-02 00:06:33 [logger.py:42] Received request cmpl-c50ca0ae383345f087147881eff719b3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:06:33 [async_llm.py:261] Added request cmpl-c50ca0ae383345f087147881eff719b3-0.
INFO 03-02 00:06:34 [logger.py:42] Received request cmpl-15773b7112084cdd9247423189a84167-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:06:34 [async_llm.py:261] Added request cmpl-15773b7112084cdd9247423189a84167-0.
INFO 03-02 00:06:35 [logger.py:42] Received request cmpl-90f8b05e3a914f4bb0c7e5bc6ee1a23f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:06:35 [async_llm.py:261] Added request cmpl-90f8b05e3a914f4bb0c7e5bc6ee1a23f-0.
INFO 03-02 00:06:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:06:36 [logger.py:42] Received request cmpl-907819a43d1145c4850904735168d151-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:06:36 [async_llm.py:261] Added request cmpl-907819a43d1145c4850904735168d151-0.
INFO 03-02 00:06:37 [logger.py:42] Received request cmpl-d6c3a8a9c88b49ea917458041eda1eaa-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:06:37 [async_llm.py:261] Added request cmpl-d6c3a8a9c88b49ea917458041eda1eaa-0.
INFO 03-02 00:06:38 [logger.py:42] Received request cmpl-5ad6bfb1dc514657853b7ba93f759727-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:06:38 [async_llm.py:261] Added request cmpl-5ad6bfb1dc514657853b7ba93f759727-0.
INFO 03-02 00:06:40 [logger.py:42] Received request cmpl-d5c46ff47f60405fb3d8185b684ec81b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:06:40 [async_llm.py:261] Added request cmpl-d5c46ff47f60405fb3d8185b684ec81b-0.
INFO 03-02 00:06:41 [logger.py:42] Received request cmpl-6863e57fbe3f4e7d8c6aa4c0ee748e69-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:06:41 [async_llm.py:261] Added request cmpl-6863e57fbe3f4e7d8c6aa4c0ee748e69-0.
INFO 03-02 00:06:42 [logger.py:42] Received request cmpl-5074ff0a3c0a4afdba656f0de8ef4413-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:06:42 [async_llm.py:261] Added request cmpl-5074ff0a3c0a4afdba656f0de8ef4413-0.
INFO 03-02 00:06:43 [logger.py:42] Received request cmpl-3680615e50a74c409af0b5d70cd13031-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:06:43 [async_llm.py:261] Added request cmpl-3680615e50a74c409af0b5d70cd13031-0.
INFO 03-02 00:06:44 [logger.py:42] Received request cmpl-84655a45913144ecbe9f6f59f30ded97-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:06:44 [async_llm.py:261] Added request cmpl-84655a45913144ecbe9f6f59f30ded97-0.
INFO 03-02 00:06:45 [logger.py:42] Received request cmpl-2b3fcc5ca2934d87be846f41694c1222-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:06:45 [async_llm.py:261] Added request cmpl-2b3fcc5ca2934d87be846f41694c1222-0.
INFO 03-02 00:06:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:06:46 [logger.py:42] Received request cmpl-c6e5aef0f18e46028802ec89e8e7f0c7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:06:46 [async_llm.py:261] Added request cmpl-c6e5aef0f18e46028802ec89e8e7f0c7-0.
INFO 03-02 00:06:47 [logger.py:42] Received request cmpl-368743ca89ea4a5c80d843f6327a3947-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:06:47 [async_llm.py:261] Added request cmpl-368743ca89ea4a5c80d843f6327a3947-0.
INFO 03-02 00:06:48 [logger.py:42] Received request cmpl-b068badc500f46f6ad1dbfa2ba4809b4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:06:48 [async_llm.py:261] Added request cmpl-b068badc500f46f6ad1dbfa2ba4809b4-0.
INFO 03-02 00:06:49 [logger.py:42] Received request cmpl-39fbd875034445e786d85a815cea142f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:06:49 [async_llm.py:261] Added request cmpl-39fbd875034445e786d85a815cea142f-0.
INFO 03-02 00:06:50 [logger.py:42] Received request cmpl-cbc2a908a8ba45c6891592a2eaf5581a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:06:50 [async_llm.py:261] Added request cmpl-cbc2a908a8ba45c6891592a2eaf5581a-0.
INFO 03-02 00:06:51 [logger.py:42] Received request cmpl-cce82bd6649d491ba2ad3a25e432905a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:06:51 [async_llm.py:261] Added request cmpl-cce82bd6649d491ba2ad3a25e432905a-0.
INFO 03-02 00:06:53 [logger.py:42] Received request cmpl-8031cc25adc74d6f961d7ab2cc66c5f9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:06:53 [async_llm.py:261] Added request cmpl-8031cc25adc74d6f961d7ab2cc66c5f9-0.
INFO 03-02 00:06:54 [logger.py:42] Received request cmpl-cfe753c7ba334396a1641edab5fa9cd8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:06:54 [async_llm.py:261] Added request cmpl-cfe753c7ba334396a1641edab5fa9cd8-0.
INFO 03-02 00:06:55 [logger.py:42] Received request cmpl-1ce74a69fc7f4824a2b4a4e3f55ffa5e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:06:55 [async_llm.py:261] Added request cmpl-1ce74a69fc7f4824a2b4a4e3f55ffa5e-0.
INFO 03-02 00:06:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:06:56 [logger.py:42] Received request cmpl-82f2c46d2a844ba9a252dedf82fd9a82-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:06:56 [async_llm.py:261] Added request cmpl-82f2c46d2a844ba9a252dedf82fd9a82-0.
INFO 03-02 00:06:57 [logger.py:42] Received request cmpl-6a66317e632a400bb34b47078dc1abc3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:06:57 [async_llm.py:261] Added request cmpl-6a66317e632a400bb34b47078dc1abc3-0.
INFO 03-02 00:06:58 [logger.py:42] Received request cmpl-36c60e83c1504eb9b9c62cdf3d418915-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:06:58 [async_llm.py:261] Added request cmpl-36c60e83c1504eb9b9c62cdf3d418915-0.
INFO 03-02 00:06:59 [logger.py:42] Received request cmpl-245f82f261dd4c72a2107a6dcafd5f97-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:06:59 [async_llm.py:261] Added request cmpl-245f82f261dd4c72a2107a6dcafd5f97-0.
INFO 03-02 00:07:00 [logger.py:42] Received request cmpl-21a9d84e657c4ba7873990a32fa7d4fc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:00 [async_llm.py:261] Added request cmpl-21a9d84e657c4ba7873990a32fa7d4fc-0.
INFO 03-02 00:07:01 [logger.py:42] Received request cmpl-0630be9394f34ad987ae4f3e1d41856e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:01 [async_llm.py:261] Added request cmpl-0630be9394f34ad987ae4f3e1d41856e-0.
INFO 03-02 00:07:02 [logger.py:42] Received request cmpl-8ae1d7e055d3478db0b5ad96b6467fa8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:02 [async_llm.py:261] Added request cmpl-8ae1d7e055d3478db0b5ad96b6467fa8-0.
INFO 03-02 00:07:03 [logger.py:42] Received request cmpl-a1a6b991ba964202a4a2052a00ef31cb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:03 [async_llm.py:261] Added request cmpl-a1a6b991ba964202a4a2052a00ef31cb-0.
INFO 03-02 00:07:04 [logger.py:42] Received request cmpl-c5c48721d36b42cb8916a6707a59d8dc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:04 [async_llm.py:261] Added request cmpl-c5c48721d36b42cb8916a6707a59d8dc-0.
INFO 03-02 00:07:06 [logger.py:42] Received request cmpl-13ccdfb72a484758926dd91795e45059-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:06 [async_llm.py:261] Added request cmpl-13ccdfb72a484758926dd91795e45059-0.
INFO 03-02 00:07:06 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:07:07 [logger.py:42] Received request cmpl-85d51fafbb0b41e69f534a6da7e5aa2a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:07 [async_llm.py:261] Added request cmpl-85d51fafbb0b41e69f534a6da7e5aa2a-0.
INFO 03-02 00:07:08 [logger.py:42] Received request cmpl-fec0dac69444471abcedbd667f1926b9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:08 [async_llm.py:261] Added request cmpl-fec0dac69444471abcedbd667f1926b9-0.
INFO 03-02 00:07:09 [logger.py:42] Received request cmpl-c5b98a218ceb4e619bbbe8a0716f80f0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:09 [async_llm.py:261] Added request cmpl-c5b98a218ceb4e619bbbe8a0716f80f0-0.
INFO 03-02 00:07:10 [logger.py:42] Received request cmpl-86feb7d3a88c432c82c321f5a5387f29-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:10 [async_llm.py:261] Added request cmpl-86feb7d3a88c432c82c321f5a5387f29-0.
INFO 03-02 00:07:11 [logger.py:42] Received request cmpl-723952c276c1451992b85962cc471217-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:11 [async_llm.py:261] Added request cmpl-723952c276c1451992b85962cc471217-0.
INFO 03-02 00:07:12 [logger.py:42] Received request cmpl-3831c32012bf4217a3e4b942ff078b7e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:12 [async_llm.py:261] Added request cmpl-3831c32012bf4217a3e4b942ff078b7e-0.
INFO 03-02 00:07:13 [logger.py:42] Received request cmpl-6f50908db5164a39af81fd433739fcca-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:13 [async_llm.py:261] Added request cmpl-6f50908db5164a39af81fd433739fcca-0.
INFO 03-02 00:07:14 [logger.py:42] Received request cmpl-6769bfb5131e4d50929167eab8b300fb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:14 [async_llm.py:261] Added request cmpl-6769bfb5131e4d50929167eab8b300fb-0.
INFO 03-02 00:07:15 [logger.py:42] Received request cmpl-c984972af7e847b6b51ded8ecff1dcf2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:15 [async_llm.py:261] Added request cmpl-c984972af7e847b6b51ded8ecff1dcf2-0.
INFO 03-02 00:07:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:07:16 [logger.py:42] Received request cmpl-53c86d1823e946d78050dfdcb44740e2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:16 [async_llm.py:261] Added request cmpl-53c86d1823e946d78050dfdcb44740e2-0.
INFO 03-02 00:07:17 [logger.py:42] Received request cmpl-161a6b5f8cc041eab861742554e6d85b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:17 [async_llm.py:261] Added request cmpl-161a6b5f8cc041eab861742554e6d85b-0.
INFO 03-02 00:07:19 [logger.py:42] Received request cmpl-dce96b39aed846dba32b8908726c2509-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:19 [async_llm.py:261] Added request cmpl-dce96b39aed846dba32b8908726c2509-0.
INFO 03-02 00:07:20 [logger.py:42] Received request cmpl-0f269bb7088749579662a24efe559532-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:20 [async_llm.py:261] Added request cmpl-0f269bb7088749579662a24efe559532-0.
INFO 03-02 00:07:21 [logger.py:42] Received request cmpl-8f9cb7e3cede43548250d4e258788177-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:21 [async_llm.py:261] Added request cmpl-8f9cb7e3cede43548250d4e258788177-0.
INFO 03-02 00:07:22 [logger.py:42] Received request cmpl-4fdc2ca7b1a3484384f21a4e616c09e2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:22 [async_llm.py:261] Added request cmpl-4fdc2ca7b1a3484384f21a4e616c09e2-0.
INFO 03-02 00:07:23 [logger.py:42] Received request cmpl-d12a85cc0c5d4907b91981f492c9c8bb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:23 [async_llm.py:261] Added request cmpl-d12a85cc0c5d4907b91981f492c9c8bb-0.
INFO 03-02 00:07:24 [logger.py:42] Received request cmpl-c69dd78edbe94f1c9ea6335750f1fc3a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:24 [async_llm.py:261] Added request cmpl-c69dd78edbe94f1c9ea6335750f1fc3a-0.
INFO 03-02 00:07:25 [logger.py:42] Received request cmpl-e441a73469a84a9880b58e5299663f64-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:25 [async_llm.py:261] Added request cmpl-e441a73469a84a9880b58e5299663f64-0.
INFO 03-02 00:07:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:07:26 [logger.py:42] Received request cmpl-afe8b7c35c4c4a42b6bb01290c2ab4d7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:26 [async_llm.py:261] Added request cmpl-afe8b7c35c4c4a42b6bb01290c2ab4d7-0.
INFO 03-02 00:07:27 [logger.py:42] Received request cmpl-b8c0601b4e3b4bcb91de424dc357ffb8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:27 [async_llm.py:261] Added request cmpl-b8c0601b4e3b4bcb91de424dc357ffb8-0.
INFO 03-02 00:07:28 [logger.py:42] Received request cmpl-bcfeb72ec6dc47af8f96bc9ced143b96-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:28 [async_llm.py:261] Added request cmpl-bcfeb72ec6dc47af8f96bc9ced143b96-0.
INFO 03-02 00:07:29 [logger.py:42] Received request cmpl-7097056b832a4d5297ca159984b0c64a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:29 [async_llm.py:261] Added request cmpl-7097056b832a4d5297ca159984b0c64a-0.
INFO 03-02 00:07:30 [logger.py:42] Received request cmpl-3cda679e57a34cb1b0a0084f60605360-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:30 [async_llm.py:261] Added request cmpl-3cda679e57a34cb1b0a0084f60605360-0.
INFO 03-02 00:07:32 [logger.py:42] Received request cmpl-4364a92fbd994deda2eec4bb4b0248a1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:32 [async_llm.py:261] Added request cmpl-4364a92fbd994deda2eec4bb4b0248a1-0.
INFO 03-02 00:07:33 [logger.py:42] Received request cmpl-6d013a563ded4151aa0b5baf95932f72-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:33 [async_llm.py:261] Added request cmpl-6d013a563ded4151aa0b5baf95932f72-0.
INFO 03-02 00:07:34 [logger.py:42] Received request cmpl-bbcec22531c84628a6feed670d254947-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:34 [async_llm.py:261] Added request cmpl-bbcec22531c84628a6feed670d254947-0.
INFO 03-02 00:07:35 [logger.py:42] Received request cmpl-4836354f65524755bf6735755a083bc6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:35 [async_llm.py:261] Added request cmpl-4836354f65524755bf6735755a083bc6-0.
INFO 03-02 00:07:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:07:36 [logger.py:42] Received request cmpl-9bffbc31855146caabd0db99bfb86e96-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:36 [async_llm.py:261] Added request cmpl-9bffbc31855146caabd0db99bfb86e96-0.
INFO 03-02 00:07:37 [logger.py:42] Received request cmpl-6e3bbcf84d5b4885a90afd4ac7d52e84-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:37 [async_llm.py:261] Added request cmpl-6e3bbcf84d5b4885a90afd4ac7d52e84-0.
INFO 03-02 00:07:38 [logger.py:42] Received request cmpl-b35a711786734d3da6f0c2c0a6975d7f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:38 [async_llm.py:261] Added request cmpl-b35a711786734d3da6f0c2c0a6975d7f-0.
INFO 03-02 00:07:39 [logger.py:42] Received request cmpl-af3af29d811b4c2d84c360e3ab31d39e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:39 [async_llm.py:261] Added request cmpl-af3af29d811b4c2d84c360e3ab31d39e-0.
INFO 03-02 00:07:40 [logger.py:42] Received request cmpl-f8d753770acb4e45a9a254dbdaa9db21-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:40 [async_llm.py:261] Added request cmpl-f8d753770acb4e45a9a254dbdaa9db21-0.
INFO 03-02 00:07:41 [logger.py:42] Received request cmpl-9b392081298f4e7899270a2fdc8ea466-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:41 [async_llm.py:261] Added request cmpl-9b392081298f4e7899270a2fdc8ea466-0.
INFO 03-02 00:07:42 [logger.py:42] Received request cmpl-76fb328abcc84f16b0a25999ea21fdc7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:42 [async_llm.py:261] Added request cmpl-76fb328abcc84f16b0a25999ea21fdc7-0.
INFO 03-02 00:07:43 [logger.py:42] Received request cmpl-fe3c5893d278451fba49e6d4d6b55448-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:43 [async_llm.py:261] Added request cmpl-fe3c5893d278451fba49e6d4d6b55448-0.
INFO 03-02 00:07:45 [logger.py:42] Received request cmpl-897957a6d7894c7e96a2b1790d978c39-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:45 [async_llm.py:261] Added request cmpl-897957a6d7894c7e96a2b1790d978c39-0.
INFO 03-02 00:07:46 [logger.py:42] Received request cmpl-dbecc38ce85149b092c511e107fc579d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:46 [async_llm.py:261] Added request cmpl-dbecc38ce85149b092c511e107fc579d-0.
INFO 03-02 00:07:46 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:07:47 [logger.py:42] Received request cmpl-697df8cf75704fe1adf81f2029deb6f9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:47 [async_llm.py:261] Added request cmpl-697df8cf75704fe1adf81f2029deb6f9-0.
INFO 03-02 00:07:48 [logger.py:42] Received request cmpl-74452dc23d1448b8a321c5010a59d834-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:48 [async_llm.py:261] Added request cmpl-74452dc23d1448b8a321c5010a59d834-0.
INFO 03-02 00:07:49 [logger.py:42] Received request cmpl-e5eb2ec462014427a454f95900912c35-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:49 [async_llm.py:261] Added request cmpl-e5eb2ec462014427a454f95900912c35-0.
INFO 03-02 00:07:50 [logger.py:42] Received request cmpl-8d119d39db9d4ef7ac42442155984b00-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:50 [async_llm.py:261] Added request cmpl-8d119d39db9d4ef7ac42442155984b00-0.
INFO 03-02 00:07:51 [logger.py:42] Received request cmpl-d5a14bd881fe4f109360ce34762dbb81-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:51 [async_llm.py:261] Added request cmpl-d5a14bd881fe4f109360ce34762dbb81-0.
INFO 03-02 00:07:52 [logger.py:42] Received request cmpl-9a47c4a1ee5d4bffb2f6c11c5e7b39b8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:52 [async_llm.py:261] Added request cmpl-9a47c4a1ee5d4bffb2f6c11c5e7b39b8-0.
INFO 03-02 00:07:53 [logger.py:42] Received request cmpl-c03e661955754f5d83a7daf24d5c3b67-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:53 [async_llm.py:261] Added request cmpl-c03e661955754f5d83a7daf24d5c3b67-0.
INFO 03-02 00:07:54 [logger.py:42] Received request cmpl-2f51a98d2a6d4e70b093f5da66229b61-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:54 [async_llm.py:261] Added request cmpl-2f51a98d2a6d4e70b093f5da66229b61-0.
INFO 03-02 00:07:55 [logger.py:42] Received request cmpl-67d9a1c98b3441d286306d59b78fbbf3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:55 [async_llm.py:261] Added request cmpl-67d9a1c98b3441d286306d59b78fbbf3-0.
INFO 03-02 00:07:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:07:56 [logger.py:42] Received request cmpl-8a0c5dbdcbc9461fbf63460264b5d2ae-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:56 [async_llm.py:261] Added request cmpl-8a0c5dbdcbc9461fbf63460264b5d2ae-0.
INFO 03-02 00:07:58 [logger.py:42] Received request cmpl-df3d148218b341f0b54f47182cad19b6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:58 [async_llm.py:261] Added request cmpl-df3d148218b341f0b54f47182cad19b6-0.
INFO 03-02 00:07:59 [logger.py:42] Received request cmpl-a1fce84a19364cff8d54eeac713207e1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:59 [async_llm.py:261] Added request cmpl-a1fce84a19364cff8d54eeac713207e1-0.
INFO 03-02 00:08:00 [logger.py:42] Received request cmpl-360a5f11cd534425aaab58584341fd7c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:08:00 [async_llm.py:261] Added request cmpl-360a5f11cd534425aaab58584341fd7c-0.
INFO 03-02 00:08:01 [logger.py:42] Received request cmpl-5c24279343fe492b8576b725cd370ad6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:08:01 [async_llm.py:261] Added request cmpl-5c24279343fe492b8576b725cd370ad6-0.
INFO 03-02 00:08:02 [logger.py:42] Received request cmpl-e46496bcb5974182b2893ea39c9e158f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:08:02 [async_llm.py:261] Added request cmpl-e46496bcb5974182b2893ea39c9e158f-0.
INFO 03-02 00:08:03 [logger.py:42] Received request cmpl-622f7c280c61424fb6d8bd64db1d1802-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:08:03 [async_llm.py:261] Added request cmpl-622f7c280c61424fb6d8bd64db1d1802-0.
INFO 03-02 00:08:04 [logger.py:42] Received request cmpl-1ab10b816afe483ca4c05e65bf95a238-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:08:04 [async_llm.py:261] Added request cmpl-1ab10b816afe483ca4c05e65bf95a238-0.
INFO 03-02 00:08:05 [logger.py:42] Received request cmpl-9c29c9275380461083185cc4e4949b89-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:08:05 [async_llm.py:261] Added request cmpl-9c29c9275380461083185cc4e4949b89-0.
INFO 03-02 00:08:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:08:06 [logger.py:42] Received request cmpl-3c8b873c587b418c80d077e49102d16b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:08:06 [async_llm.py:261] Added request cmpl-3c8b873c587b418c80d077e49102d16b-0.
INFO 03-02 00:08:07 [logger.py:42] Received request cmpl-0ef22cb1f03140d1958d76653ab3e26c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:08:07 [async_llm.py:261] Added request cmpl-0ef22cb1f03140d1958d76653ab3e26c-0.
INFO 03-02 00:08:08 [logger.py:42] Received request cmpl-afaaa8950fbf43a9ba6a91ad4241cd2d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:08:08 [async_llm.py:261] Added request cmpl-afaaa8950fbf43a9ba6a91ad4241cd2d-0.
INFO 03-02 00:08:09 [logger.py:42] Received request cmpl-31ed1e52c7a048919f9a1b435333aab2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:08:09 [async_llm.py:261] Added request cmpl-31ed1e52c7a048919f9a1b435333aab2-0.
INFO 03-02 00:08:11 [logger.py:42] Received request cmpl-bd313902d5274800badea2245e2caa6d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:08:11 [async_llm.py:261] Added request cmpl-bd313902d5274800badea2245e2caa6d-0.
INFO 03-02 00:08:12 [logger.py:42] Received request cmpl-f4f7163b3ee24cdf962c414fed042d96-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:08:12 [async_llm.py:261] Added request cmpl-f4f7163b3ee24cdf962c414fed042d96-0.
INFO 03-02 00:08:13 [logger.py:42] Received request cmpl-743e5e2a908d4e81a60b6b5a25508652-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:08:13 [async_llm.py:261] Added request cmpl-743e5e2a908d4e81a60b6b5a25508652-0.
INFO 03-02 00:08:14 [logger.py:42] Received request cmpl-023ac8ad34b04d43a5d492e0de6bb8fc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:08:14 [async_llm.py:261] Added request cmpl-023ac8ad34b04d43a5d492e0de6bb8fc-0.
INFO 03-02 00:08:15 [logger.py:42] Received request cmpl-cf815bae8e12413c8b0c0044735b3293-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:08:15 [async_llm.py:261] Added request cmpl-cf815bae8e12413c8b0c0044735b3293-0.
INFO 03-02 00:08:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:08:16 [logger.py:42] Received request cmpl-aecd675645c048ce8e8a07c179309dbd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:08:16 [async_llm.py:261] Added request cmpl-aecd675645c048ce8e8a07c179309dbd-0.
INFO 03-02 00:08:17 [logger.py:42] Received request cmpl-c380b0817e5b4c8aa97737fec3f9f99e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:08:17 [async_llm.py:261] Added request cmpl-c380b0817e5b4c8aa97737fec3f9f99e-0.
INFO 03-02 00:08:18 [logger.py:42] Received request cmpl-03ae0c186c914cba8a0db303a6c2f0bf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:08:18 [async_llm.py:261] Added request cmpl-03ae0c186c914cba8a0db303a6c2f0bf-0.
INFO 03-02 00:08:19 [logger.py:42] Received request cmpl-2486bf1aa8d24ee18e4c2cb5bf98856f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:08:19 [async_llm.py:261] Added request cmpl-2486bf1aa8d24ee18e4c2cb5bf98856f-0.
INFO 03-02 00:08:20 [logger.py:42] Received request cmpl-45b00089a44c483ead17f9cc4f6f0fc9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:08:20 [async_llm.py:261] Added request cmpl-45b00089a44c483ead17f9cc4f6f0fc9-0.
INFO 03-02 00:08:21 [logger.py:42] Received request cmpl-569a472486d64e6fac154727d8b5bea2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:08:21 [async_llm.py:261] Added request cmpl-569a472486d64e6fac154727d8b5bea2-0.
INFO 03-02 00:08:22 [logger.py:42] Received request cmpl-d76716ac286a4957948fbfc5756cf27f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:08:22 [async_llm.py:261] Added request cmpl-d76716ac286a4957948fbfc5756cf27f-0.
INFO 03-02 00:08:24 [logger.py:42] Received request cmpl-9304c7a5a1144daa8864ae52a388e3a8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:08:24 [async_llm.py:261] Added request cmpl-9304c7a5a1144daa8864ae52a388e3a8-0.
INFO 03-02 00:08:25 [logger.py:42] Received request cmpl-59bb18d102914098b9acc1e808dbf435-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:08:25 [async_llm.py:261] Added request cmpl-59bb18d102914098b9acc1e808dbf435-0.
INFO 03-02 00:08:26 [logger.py:42] Received request cmpl-86ba9f1b01f14b2dbaec404e09c7060d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:08:26 [async_llm.py:261] Added request cmpl-86ba9f1b01f14b2dbaec404e09c7060d-0.
INFO 03-02 00:08:26 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.7 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 1.3%, Prefix cache hit rate: 0.0%
INFO 03-02 00:08:27 [logger.py:42] Received request cmpl-6e983b590e774caab90e66bcdfde1fae-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:08:27 [async_llm.py:261] Added request cmpl-6e983b590e774caab90e66bcdfde1fae-0.
[... 8 further request cycles (00:08:28-00:08:35) omitted; each repeats the same Received request / 200 OK / Added request pattern with identical parameters, differing only in timestamp and request ID ...]
INFO 03-02 00:08:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.8 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[... 9 further request cycles (00:08:37-00:08:45) omitted ...]
INFO 03-02 00:08:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[... 9 further request cycles (00:08:46-00:08:55) omitted ...]
INFO 03-02 00:08:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[... 9 further request cycles (00:08:56-00:09:05) omitted ...]
INFO 03-02 00:09:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[... 6 further request cycles (00:09:06-00:09:11) omitted; the final Received request entry at 00:09:11 is cut off by the end of the excerpt ...]
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:09:11 [async_llm.py:261] Added request cmpl-d4da2641ca2342baadf2ad46c638fb32-0.
INFO 03-02 00:09:12 [logger.py:42] Received request cmpl-63f7f16cb2c845079876bb46c2dedd26-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:09:12 [async_llm.py:261] Added request cmpl-63f7f16cb2c845079876bb46c2dedd26-0.
INFO 03-02 00:09:13 [logger.py:42] Received request cmpl-917a437dc8044edc956dedae03c34cea-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:09:13 [async_llm.py:261] Added request cmpl-917a437dc8044edc956dedae03c34cea-0.
INFO 03-02 00:09:14 [logger.py:42] Received request cmpl-fd66459e02ee4a6ca72bca9c2271890c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:09:14 [async_llm.py:261] Added request cmpl-fd66459e02ee4a6ca72bca9c2271890c-0.
INFO 03-02 00:09:16 [logger.py:42] Received request cmpl-1630390f34b44daab5a569b699b31fea-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:09:16 [async_llm.py:261] Added request cmpl-1630390f34b44daab5a569b699b31fea-0.
INFO 03-02 00:09:16 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
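The engine summary above is consistent with the request pattern in the surrounding lines: each request carries a 7-token prompt (`prompt_token_ids` has 7 entries) and caps generation at `max_tokens=5`, arriving roughly once per second. A small sanity-check sketch (the ~10 s reporting window and the one-request-per-second cadence are read off the timestamps, not stated by the log):

```python
# Sanity check for the loggers.py throughput summaries (assumption: the
# client issues ~1 completion request per second, as the per-second
# timestamps in this log suggest, and averages cover a ~10 s window).
PROMPT_TOKENS = 7   # len([2, 5986, 496, 3823, 4260, 8417, 236761])
MAX_TOKENS = 5      # max_tokens from the logged SamplingParams
WINDOW_S = 10.0     # approximate interval between summary lines

def expected_throughput(requests_in_window: int) -> tuple[float, float]:
    """Return (prompt tokens/s, generation tokens/s) for one window."""
    prompt_tps = requests_in_window * PROMPT_TOKENS / WINDOW_S
    gen_tps = requests_in_window * MAX_TOKENS / WINDOW_S
    return prompt_tps, gen_tps

print(expected_throughput(10))  # 10 reqs in the window -> (7.0, 5.0)
print(expected_throughput(9))   # one skipped second   -> (6.3, 4.5)
```

With 10 requests in a window this gives the 7.0/5.0 tokens/s reported here; with 9 (note the gap between 00:09:14 and 00:09:16) it gives the 6.3/4.5 tokens/s seen in the later summaries.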
INFO 03-02 00:09:17 [logger.py:42] Received request cmpl-98f90a7a9f4741b9b8e9b06512bec3a1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:09:17 [async_llm.py:261] Added request cmpl-98f90a7a9f4741b9b8e9b06512bec3a1-0.
INFO 03-02 00:09:18 [logger.py:42] Received request cmpl-d3f80dcc34c745e58354b79c7326bffb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:09:18 [async_llm.py:261] Added request cmpl-d3f80dcc34c745e58354b79c7326bffb-0.
INFO 03-02 00:09:19 [logger.py:42] Received request cmpl-ec01e30980f54a30919b34173ce7eb8b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:09:19 [async_llm.py:261] Added request cmpl-ec01e30980f54a30919b34173ce7eb8b-0.
INFO 03-02 00:09:20 [logger.py:42] Received request cmpl-08c6a621816649c88ba676bd532588d1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:09:20 [async_llm.py:261] Added request cmpl-08c6a621816649c88ba676bd532588d1-0.
INFO 03-02 00:09:21 [logger.py:42] Received request cmpl-35ee969e727743869b51cd3fee336105-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:09:21 [async_llm.py:261] Added request cmpl-35ee969e727743869b51cd3fee336105-0.
INFO 03-02 00:09:22 [logger.py:42] Received request cmpl-9e481b13f7a747e6bc066929d1f2af97-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:09:22 [async_llm.py:261] Added request cmpl-9e481b13f7a747e6bc066929d1f2af97-0.
INFO 03-02 00:09:23 [logger.py:42] Received request cmpl-ba74f99e52824cb99c8920b9d47bc92e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:09:23 [async_llm.py:261] Added request cmpl-ba74f99e52824cb99c8920b9d47bc92e-0.
INFO 03-02 00:09:24 [logger.py:42] Received request cmpl-be3c9b4ce6e740d989f6c199c8f39bfb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:09:24 [async_llm.py:261] Added request cmpl-be3c9b4ce6e740d989f6c199c8f39bfb-0.
INFO 03-02 00:09:25 [logger.py:42] Received request cmpl-1920e7ace2a94643b35672bda8bdc0a5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:09:25 [async_llm.py:261] Added request cmpl-1920e7ace2a94643b35672bda8bdc0a5-0.
INFO 03-02 00:09:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:09:26 [logger.py:42] Received request cmpl-2ff46a3d9c254acc9525ff69721907d8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:09:26 [async_llm.py:261] Added request cmpl-2ff46a3d9c254acc9525ff69721907d8-0.
INFO 03-02 00:09:27 [logger.py:42] Received request cmpl-1862e45c990346929f3c4c239b686a79-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:09:27 [async_llm.py:261] Added request cmpl-1862e45c990346929f3c4c239b686a79-0.
INFO 03-02 00:09:29 [logger.py:42] Received request cmpl-1d5bacf4866a4e69aa0defc636c5def3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:09:29 [async_llm.py:261] Added request cmpl-1d5bacf4866a4e69aa0defc636c5def3-0.
INFO 03-02 00:09:30 [logger.py:42] Received request cmpl-39415f1e88584a3bb7e65842cafbcfee-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:09:30 [async_llm.py:261] Added request cmpl-39415f1e88584a3bb7e65842cafbcfee-0.
INFO 03-02 00:09:31 [logger.py:42] Received request cmpl-d5bb10478d064aeda8ed1671c60afd8e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:09:31 [async_llm.py:261] Added request cmpl-d5bb10478d064aeda8ed1671c60afd8e-0.
INFO 03-02 00:09:32 [logger.py:42] Received request cmpl-a06abf762fbe4a11a511d04671cab919-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:09:32 [async_llm.py:261] Added request cmpl-a06abf762fbe4a11a511d04671cab919-0.
INFO 03-02 00:09:33 [logger.py:42] Received request cmpl-52e4d30bd7d447a488abb9d7efae007f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:09:33 [async_llm.py:261] Added request cmpl-52e4d30bd7d447a488abb9d7efae007f-0.
INFO 03-02 00:09:34 [logger.py:42] Received request cmpl-0414d858514942c1b60fe4c51904ca56-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:09:34 [async_llm.py:261] Added request cmpl-0414d858514942c1b60fe4c51904ca56-0.
INFO 03-02 00:09:35 [logger.py:42] Received request cmpl-23509463a3a442f4a84aad7bb5c448e9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:09:35 [async_llm.py:261] Added request cmpl-23509463a3a442f4a84aad7bb5c448e9-0.
INFO 03-02 00:09:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:09:36 [logger.py:42] Received request cmpl-cee7a209e5024c2d86e53da2e9c32d48-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:09:36 [async_llm.py:261] Added request cmpl-cee7a209e5024c2d86e53da2e9c32d48-0.
INFO 03-02 00:09:37 [logger.py:42] Received request cmpl-086323e053f8470a98032e2a526f47b5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:09:37 [async_llm.py:261] Added request cmpl-086323e053f8470a98032e2a526f47b5-0.
INFO 03-02 00:09:38 [logger.py:42] Received request cmpl-186f34dd31f44c52accaba7ca6b09ee2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:09:38 [async_llm.py:261] Added request cmpl-186f34dd31f44c52accaba7ca6b09ee2-0.
INFO 03-02 00:09:39 [logger.py:42] Received request cmpl-d89e6deb21174fe6a1d4ba08ff4a617e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:09:39 [async_llm.py:261] Added request cmpl-d89e6deb21174fe6a1d4ba08ff4a617e-0.
INFO 03-02 00:09:40 [logger.py:42] Received request cmpl-2d17a7db4d594817903d63f5a292f1cc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:09:40 [async_llm.py:261] Added request cmpl-2d17a7db4d594817903d63f5a292f1cc-0.
INFO 03-02 00:09:42 [logger.py:42] Received request cmpl-66147d13982b482883ba3f84bd226d68-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:09:42 [async_llm.py:261] Added request cmpl-66147d13982b482883ba3f84bd226d68-0.
INFO 03-02 00:09:43 [logger.py:42] Received request cmpl-4de52d5045cb4bf88e87cb8dc8136406-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:09:43 [async_llm.py:261] Added request cmpl-4de52d5045cb4bf88e87cb8dc8136406-0.
INFO 03-02 00:09:44 [logger.py:42] Received request cmpl-0117514107284fd98daa16ea79328fef-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:09:44 [async_llm.py:261] Added request cmpl-0117514107284fd98daa16ea79328fef-0.
INFO 03-02 00:09:45 [logger.py:42] Received request cmpl-1be8f67bdf294cfaa0e8149a23cf8d0d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:09:45 [async_llm.py:261] Added request cmpl-1be8f67bdf294cfaa0e8149a23cf8d0d-0.
INFO 03-02 00:09:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[... 03-02 00:09:46 through 03-02 00:10:29: the same per-second pattern repeats (a "Received request" entry with identical prompt and SamplingParams but a unique cmpl-* request ID, "POST /v1/completions HTTP/1.1" 200 OK, then "Added request"), with the periodic Engine 000 stats holding steady at 6.3-7.0 tokens/s avg prompt throughput, 4.5-5.0 tokens/s avg generation throughput, 0 running and 0 waiting reqs, 0.7% GPU KV cache usage, and 0.0% prefix cache hit rate ...]
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:10:29 [async_llm.py:261] Added request cmpl-b1c2b54a26894c1b8d9be165efefe220-0.
INFO 03-02 00:10:30 [logger.py:42] Received request cmpl-762099a06816452f8dc829e8144f0e91-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:10:30 [async_llm.py:261] Added request cmpl-762099a06816452f8dc829e8144f0e91-0.
INFO 03-02 00:10:31 [logger.py:42] Received request cmpl-8ca99e28fc4645a69591af1a9bfd9bb3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:10:31 [async_llm.py:261] Added request cmpl-8ca99e28fc4645a69591af1a9bfd9bb3-0.
INFO 03-02 00:10:32 [logger.py:42] Received request cmpl-be6dd57ee80b4a6b9eae13b4b85306d3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:10:32 [async_llm.py:261] Added request cmpl-be6dd57ee80b4a6b9eae13b4b85306d3-0.
INFO 03-02 00:10:34 [logger.py:42] Received request cmpl-ebcc43f485c6460db675244a4ab6bde7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:10:34 [async_llm.py:261] Added request cmpl-ebcc43f485c6460db675244a4ab6bde7-0.
INFO 03-02 00:10:35 [logger.py:42] Received request cmpl-1a32fcec81264d32919e3f8ff2cddd26-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:10:35 [async_llm.py:261] Added request cmpl-1a32fcec81264d32919e3f8ff2cddd26-0.
INFO 03-02 00:10:36 [logger.py:42] Received request cmpl-58b0ed2ce3944fe4bc7f36b20e218c78-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:10:36 [async_llm.py:261] Added request cmpl-58b0ed2ce3944fe4bc7f36b20e218c78-0.
INFO 03-02 00:10:36 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.6 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 1.3%, Prefix cache hit rate: 0.0%
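The periodic `loggers.py:116` lines above summarize engine health roughly every ten seconds: prompt/generation throughput, queue depth, and KV-cache utilization. For monitoring, these can be pulled out of the log stream with a small parser; the sketch below assumes the exact line format shown here and may need adjusting for other vLLM versions.

```python
import re

# Regex matching the "Engine NNN:" stats lines emitted by vLLM's periodic logger.
# Field names mirror the labels in the log; this format is assumed from the
# lines above and is not a stable API.
STATS_RE = re.compile(
    r"Avg prompt throughput: (?P<prompt_tps>[\d.]+) tokens/s, "
    r"Avg generation throughput: (?P<gen_tps>[\d.]+) tokens/s, "
    r"Running: (?P<running>\d+) reqs, Waiting: (?P<waiting>\d+) reqs, "
    r"GPU KV cache usage: (?P<kv_pct>[\d.]+)%, "
    r"Prefix cache hit rate: (?P<prefix_pct>[\d.]+)%"
)

def parse_engine_stats(line: str):
    """Extract throughput and cache metrics from one engine-stats log line.

    Returns a dict of typed values, or None if the line is not a stats line.
    """
    m = STATS_RE.search(line)
    if m is None:
        return None
    d = m.groupdict()
    return {
        "prompt_tps": float(d["prompt_tps"]),
        "gen_tps": float(d["gen_tps"]),
        "running": int(d["running"]),
        "waiting": int(d["waiting"]),
        "kv_cache_pct": float(d["kv_pct"]),
        "prefix_hit_pct": float(d["prefix_pct"]),
    }
```

Fed the stats line directly above, this yields `prompt_tps=7.0`, `running=1`, and `kv_cache_pct=1.3`; non-stats lines (the request/response entries) return `None`.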
INFO 03-02 00:10:37 [logger.py:42] Received request cmpl-44d6d07b735142cbb0cd6acbe89d4046-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:10:37 [async_llm.py:261] Added request cmpl-44d6d07b735142cbb0cd6acbe89d4046-0.
INFO 03-02 00:10:38 [logger.py:42] Received request cmpl-4c3e3e7b3b5446068bd22979be89b9ca-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:10:38 [async_llm.py:261] Added request cmpl-4c3e3e7b3b5446068bd22979be89b9ca-0.
INFO 03-02 00:10:39 [logger.py:42] Received request cmpl-685ed035815349d184037205b961af53-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:10:39 [async_llm.py:261] Added request cmpl-685ed035815349d184037205b961af53-0.
INFO 03-02 00:10:40 [logger.py:42] Received request cmpl-3e7299bb7bf5442bb1d5b152571a358d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:10:40 [async_llm.py:261] Added request cmpl-3e7299bb7bf5442bb1d5b152571a358d-0.
INFO 03-02 00:10:41 [logger.py:42] Received request cmpl-754d8632d8264a468ae7197e92c1b38a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:10:41 [async_llm.py:261] Added request cmpl-754d8632d8264a468ae7197e92c1b38a-0.
INFO 03-02 00:10:42 [logger.py:42] Received request cmpl-ef46a46e4f264362856ec325591805e8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:10:42 [async_llm.py:261] Added request cmpl-ef46a46e4f264362856ec325591805e8-0.
INFO 03-02 00:10:43 [logger.py:42] Received request cmpl-3f0b536c6df14b77aa2c25c911fba1c1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:10:43 [async_llm.py:261] Added request cmpl-3f0b536c6df14b77aa2c25c911fba1c1-0.
INFO 03-02 00:10:44 [logger.py:42] Received request cmpl-be85c5bc1b4143e8bd120c3fb8865bd5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:10:44 [async_llm.py:261] Added request cmpl-be85c5bc1b4143e8bd120c3fb8865bd5-0.
INFO 03-02 00:10:45 [logger.py:42] Received request cmpl-602d6656402c4ca9a9717c63a57a13b5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:10:45 [async_llm.py:261] Added request cmpl-602d6656402c4ca9a9717c63a57a13b5-0.
INFO 03-02 00:10:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.9 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:10:47 [logger.py:42] Received request cmpl-8858e08b678d40eda631c45af69debc1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:10:47 [async_llm.py:261] Added request cmpl-8858e08b678d40eda631c45af69debc1-0.
INFO 03-02 00:10:48 [logger.py:42] Received request cmpl-844983ef6bd54f3fa6a5ca5e0d39b7a9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:10:48 [async_llm.py:261] Added request cmpl-844983ef6bd54f3fa6a5ca5e0d39b7a9-0.
INFO 03-02 00:10:49 [logger.py:42] Received request cmpl-b8ebdcf0b14a4394b85daddfd82cd686-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:10:49 [async_llm.py:261] Added request cmpl-b8ebdcf0b14a4394b85daddfd82cd686-0.
INFO 03-02 00:10:50 [logger.py:42] Received request cmpl-8ff00448eb3c498095b1f0b0ebd04c58-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:10:50 [async_llm.py:261] Added request cmpl-8ff00448eb3c498095b1f0b0ebd04c58-0.
INFO 03-02 00:10:51 [logger.py:42] Received request cmpl-d98678e6c78d4567a4acefe074c4a2b4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:10:51 [async_llm.py:261] Added request cmpl-d98678e6c78d4567a4acefe074c4a2b4-0.
INFO 03-02 00:10:52 [logger.py:42] Received request cmpl-785bfca75f7949278d0f409473124fb7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:10:52 [async_llm.py:261] Added request cmpl-785bfca75f7949278d0f409473124fb7-0.
INFO 03-02 00:10:53 [logger.py:42] Received request cmpl-62145ad8d6ea4cdf8df9c14c0e7be66d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:10:53 [async_llm.py:261] Added request cmpl-62145ad8d6ea4cdf8df9c14c0e7be66d-0.
INFO 03-02 00:10:54 [logger.py:42] Received request cmpl-ada530232e2f4b62b999e6a750b42950-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:10:54 [async_llm.py:261] Added request cmpl-ada530232e2f4b62b999e6a750b42950-0.
INFO 03-02 00:10:55 [logger.py:42] Received request cmpl-e426a806f25243559a560fb0144af777-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:10:55 [async_llm.py:261] Added request cmpl-e426a806f25243559a560fb0144af777-0.
INFO 03-02 00:10:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:10:56 [logger.py:42] Received request cmpl-cf8d01220ced470bb83b85f99aa014a4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:10:56 [async_llm.py:261] Added request cmpl-cf8d01220ced470bb83b85f99aa014a4-0.
INFO 03-02 00:10:57 [logger.py:42] Received request cmpl-b66e21acd2cc4290aa898b9043d395fd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:10:57 [async_llm.py:261] Added request cmpl-b66e21acd2cc4290aa898b9043d395fd-0.
INFO 03-02 00:10:58 [logger.py:42] Received request cmpl-183f3d54ec2d4b16bbc976197f9d140d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:10:58 [async_llm.py:261] Added request cmpl-183f3d54ec2d4b16bbc976197f9d140d-0.
INFO 03-02 00:11:00 [logger.py:42] Received request cmpl-42a79480ea8342fb802b991cd802f2b7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:11:00 [async_llm.py:261] Added request cmpl-42a79480ea8342fb802b991cd802f2b7-0.
INFO 03-02 00:11:01 [logger.py:42] Received request cmpl-ca7947a63f754d3fa381222da19ab205-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:11:01 [async_llm.py:261] Added request cmpl-ca7947a63f754d3fa381222da19ab205-0.
INFO 03-02 00:11:02 [logger.py:42] Received request cmpl-e0e8a24921e548318d867b36dc1767a3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:11:02 [async_llm.py:261] Added request cmpl-e0e8a24921e548318d867b36dc1767a3-0.
INFO 03-02 00:11:03 [logger.py:42] Received request cmpl-d7b72d09d1c943f89b2170d93ce8d1ee-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:11:03 [async_llm.py:261] Added request cmpl-d7b72d09d1c943f89b2170d93ce8d1ee-0.
[log condensed: 40 further /v1/completions requests with identical parameters (prompt: 'write a quick sort algorithm.', max_tokens=5, temperature=0.0) arrived at roughly one-second intervals between 00:11:04 and 00:11:46; each produced the same three-line pattern — Received request → "POST /v1/completions HTTP/1.1" 200 OK → Added request — differing only in request ID and timestamp. The periodic engine-stat lines from this window are retained below.]
INFO 03-02 00:11:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:11:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:11:26 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:11:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:11:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:11:46 [async_llm.py:261] Added request cmpl-62726d5fbce043da8f745d822d3fc539-0.
INFO 03-02 00:11:47 [logger.py:42] Received request cmpl-b9e66632f5944d1a880cf1153c98bb65-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:11:47 [async_llm.py:261] Added request cmpl-b9e66632f5944d1a880cf1153c98bb65-0.
INFO 03-02 00:11:48 [logger.py:42] Received request cmpl-97f84f3748214cdea2b8869b617b34ea-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:11:48 [async_llm.py:261] Added request cmpl-97f84f3748214cdea2b8869b617b34ea-0.
INFO 03-02 00:11:49 [logger.py:42] Received request cmpl-f33f2c942fb64872872b38687089d4e5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:11:49 [async_llm.py:261] Added request cmpl-f33f2c942fb64872872b38687089d4e5-0.
INFO 03-02 00:11:50 [logger.py:42] Received request cmpl-f32009601965430b956e9acce9d87b91-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:11:50 [async_llm.py:261] Added request cmpl-f32009601965430b956e9acce9d87b91-0.
INFO 03-02 00:11:52 [logger.py:42] Received request cmpl-7fa49a5d4528484d8366a8b494f2099b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:11:52 [async_llm.py:261] Added request cmpl-7fa49a5d4528484d8366a8b494f2099b-0.
INFO 03-02 00:11:53 [logger.py:42] Received request cmpl-748aa711774340368e5d2618d442adb9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:11:53 [async_llm.py:261] Added request cmpl-748aa711774340368e5d2618d442adb9-0.
INFO 03-02 00:11:54 [logger.py:42] Received request cmpl-98c269b7d0d34a59b0499d62b4afddc9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:11:54 [async_llm.py:261] Added request cmpl-98c269b7d0d34a59b0499d62b4afddc9-0.
INFO 03-02 00:11:55 [logger.py:42] Received request cmpl-994d734351d14320b338bcc6f574e5cb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:11:55 [async_llm.py:261] Added request cmpl-994d734351d14320b338bcc6f574e5cb-0.
INFO 03-02 00:11:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:11:56 [logger.py:42] Received request cmpl-6e96b72fc3fc45e58e7efb6524396975-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:11:56 [async_llm.py:261] Added request cmpl-6e96b72fc3fc45e58e7efb6524396975-0.
INFO 03-02 00:11:57 [logger.py:42] Received request cmpl-fe0f9ab3eb514cfeb3a660d149936d37-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:11:57 [async_llm.py:261] Added request cmpl-fe0f9ab3eb514cfeb3a660d149936d37-0.
INFO 03-02 00:11:58 [logger.py:42] Received request cmpl-212538ca36bc4e29a3f82e1f769a3694-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:11:58 [async_llm.py:261] Added request cmpl-212538ca36bc4e29a3f82e1f769a3694-0.
INFO 03-02 00:11:59 [logger.py:42] Received request cmpl-2336e02b032b472bb9d410b105a12a96-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:11:59 [async_llm.py:261] Added request cmpl-2336e02b032b472bb9d410b105a12a96-0.
INFO 03-02 00:12:00 [logger.py:42] Received request cmpl-ca0d4dc3666f409a957c26da30e980b2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:12:00 [async_llm.py:261] Added request cmpl-ca0d4dc3666f409a957c26da30e980b2-0.
INFO 03-02 00:12:01 [logger.py:42] Received request cmpl-4a0751e475f141c0852e504ccc13f92c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:12:01 [async_llm.py:261] Added request cmpl-4a0751e475f141c0852e504ccc13f92c-0.
INFO 03-02 00:12:02 [logger.py:42] Received request cmpl-da44be3b4edf46048fbb880c42bd45e6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:12:02 [async_llm.py:261] Added request cmpl-da44be3b4edf46048fbb880c42bd45e6-0.
INFO 03-02 00:12:03 [logger.py:42] Received request cmpl-00eeafbbb745481488883c2b71c75c6a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:12:03 [async_llm.py:261] Added request cmpl-00eeafbbb745481488883c2b71c75c6a-0.
INFO 03-02 00:12:05 [logger.py:42] Received request cmpl-e17d5b0fc1a54407966af3398ee46835-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:12:05 [async_llm.py:261] Added request cmpl-e17d5b0fc1a54407966af3398ee46835-0.
INFO 03-02 00:12:06 [logger.py:42] Received request cmpl-7bdc105373b745299ee7cd0900bdf405-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:12:06 [async_llm.py:261] Added request cmpl-7bdc105373b745299ee7cd0900bdf405-0.
INFO 03-02 00:12:06 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:12:07 [logger.py:42] Received request cmpl-561279e8440e441ab118879964a6cd21-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:12:07 [async_llm.py:261] Added request cmpl-561279e8440e441ab118879964a6cd21-0.
INFO 03-02 00:12:08 [logger.py:42] Received request cmpl-f012e230f1c74cc39f763bf064ff482e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:12:08 [async_llm.py:261] Added request cmpl-f012e230f1c74cc39f763bf064ff482e-0.
INFO 03-02 00:12:09 [logger.py:42] Received request cmpl-f08af839c2ba44ebb85823490baa53d3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:12:09 [async_llm.py:261] Added request cmpl-f08af839c2ba44ebb85823490baa53d3-0.
INFO 03-02 00:12:10 [logger.py:42] Received request cmpl-ea598a08ca7e429f8437a476d37b6128-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:12:10 [async_llm.py:261] Added request cmpl-ea598a08ca7e429f8437a476d37b6128-0.
INFO 03-02 00:12:11 [logger.py:42] Received request cmpl-9bc20793f06245c5bdc797c382720f46-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:12:11 [async_llm.py:261] Added request cmpl-9bc20793f06245c5bdc797c382720f46-0.
INFO 03-02 00:12:12 [logger.py:42] Received request cmpl-ca814ece133841fab1fd363b4e28e1a5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:12:12 [async_llm.py:261] Added request cmpl-ca814ece133841fab1fd363b4e28e1a5-0.
INFO 03-02 00:12:13 [logger.py:42] Received request cmpl-0f3b8e0992594d56b837c8813d914dec-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:12:13 [async_llm.py:261] Added request cmpl-0f3b8e0992594d56b837c8813d914dec-0.
INFO 03-02 00:12:14 [logger.py:42] Received request cmpl-9f8efa55dc2c44f39b797c1bf4a77a92-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:12:14 [async_llm.py:261] Added request cmpl-9f8efa55dc2c44f39b797c1bf4a77a92-0.
INFO 03-02 00:12:15 [logger.py:42] Received request cmpl-49cf6e33846e41aaaf54530555470492-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:12:15 [async_llm.py:261] Added request cmpl-49cf6e33846e41aaaf54530555470492-0.
INFO 03-02 00:12:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:12:16 [logger.py:42] Received request cmpl-6f92e4b1a77d4b0f91cc6bfe5bf32896-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:12:16 [async_llm.py:261] Added request cmpl-6f92e4b1a77d4b0f91cc6bfe5bf32896-0.
INFO 03-02 00:12:18 [logger.py:42] Received request cmpl-dc52ccbc398045248f0bd4c32721bd80-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:12:18 [async_llm.py:261] Added request cmpl-dc52ccbc398045248f0bd4c32721bd80-0.
INFO 03-02 00:12:19 [logger.py:42] Received request cmpl-a0984ed937de43a7b2f0bd14d0bd8a22-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:12:19 [async_llm.py:261] Added request cmpl-a0984ed937de43a7b2f0bd14d0bd8a22-0.
INFO 03-02 00:12:20 [logger.py:42] Received request cmpl-bc80bd9343b54fd883d3ab77dbd74e95-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:12:20 [async_llm.py:261] Added request cmpl-bc80bd9343b54fd883d3ab77dbd74e95-0.
INFO 03-02 00:12:21 [logger.py:42] Received request cmpl-ec754e40268844b7a8d7b04352677e2a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:12:21 [async_llm.py:261] Added request cmpl-ec754e40268844b7a8d7b04352677e2a-0.
[41 further near-identical request cycles from 00:12:22 to 00:13:05 elided. Each repeats the same three-line pattern as the cycle above — `Received request cmpl-…-0` with the same prompt ('write a quick sort algorithm.') and identical SamplingParams (temperature=0.0, max_tokens=5), then `"POST /v1/completions HTTP/1.1" 200 OK` from 1.2.3.5:1235, then `Added request cmpl-…-0` — differing only in request ID and timestamp. The periodic engine-stats lines over the same window were:]
INFO 03-02 00:12:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:12:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:12:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:12:56 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:13:05 [async_llm.py:261] Added request cmpl-2488d832cfb64f06b937309bed4c3085-0.
INFO 03-02 00:13:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:13:06 [logger.py:42] Received request cmpl-1ab76196065e4195b006cf56805766ca-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:13:06 [async_llm.py:261] Added request cmpl-1ab76196065e4195b006cf56805766ca-0.
INFO 03-02 00:13:07 [logger.py:42] Received request cmpl-271981eb6abd4c7dbe000898fa5982e9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:13:07 [async_llm.py:261] Added request cmpl-271981eb6abd4c7dbe000898fa5982e9-0.
INFO 03-02 00:13:08 [logger.py:42] Received request cmpl-5ef6707b84e740b1a1e45e59234fbca7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:13:09 [async_llm.py:261] Added request cmpl-5ef6707b84e740b1a1e45e59234fbca7-0.
INFO 03-02 00:13:10 [logger.py:42] Received request cmpl-5dde99be3a29459481f69b7013a93524-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:13:10 [async_llm.py:261] Added request cmpl-5dde99be3a29459481f69b7013a93524-0.
INFO 03-02 00:13:11 [logger.py:42] Received request cmpl-d09eeb9d963845e29ced2361525b9760-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:13:11 [async_llm.py:261] Added request cmpl-d09eeb9d963845e29ced2361525b9760-0.
INFO 03-02 00:13:12 [logger.py:42] Received request cmpl-428390c6a9dd40d69fb930ad988e067e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:13:12 [async_llm.py:261] Added request cmpl-428390c6a9dd40d69fb930ad988e067e-0.
INFO 03-02 00:13:13 [logger.py:42] Received request cmpl-61a874eb3ab440d8a1f6f3a29ac3b121-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:13:13 [async_llm.py:261] Added request cmpl-61a874eb3ab440d8a1f6f3a29ac3b121-0.
INFO 03-02 00:13:14 [logger.py:42] Received request cmpl-cd834f03ce8a45fc8be968877d171751-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:13:14 [async_llm.py:261] Added request cmpl-cd834f03ce8a45fc8be968877d171751-0.
INFO 03-02 00:13:15 [logger.py:42] Received request cmpl-01e1c967e60c43878fc3f71e15743cce-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:13:15 [async_llm.py:261] Added request cmpl-01e1c967e60c43878fc3f71e15743cce-0.
INFO 03-02 00:13:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:13:16 [logger.py:42] Received request cmpl-f0cb0fc927ac40a2b5410c21df2a3123-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:13:16 [async_llm.py:261] Added request cmpl-f0cb0fc927ac40a2b5410c21df2a3123-0.
INFO 03-02 00:13:17 [logger.py:42] Received request cmpl-ddeb3fbddf8e4247bdf41f3207f639d4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:13:17 [async_llm.py:261] Added request cmpl-ddeb3fbddf8e4247bdf41f3207f639d4-0.
INFO 03-02 00:13:18 [logger.py:42] Received request cmpl-7b72f6679f674c3d8590fdd09d1b5ff1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:13:18 [async_llm.py:261] Added request cmpl-7b72f6679f674c3d8590fdd09d1b5ff1-0.
INFO 03-02 00:13:19 [logger.py:42] Received request cmpl-ab8df6fbe455411d86b8bfb43da924cb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:13:19 [async_llm.py:261] Added request cmpl-ab8df6fbe455411d86b8bfb43da924cb-0.
INFO 03-02 00:13:20 [logger.py:42] Received request cmpl-654abef74da94627b8e9a0c1c8571a61-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:13:20 [async_llm.py:261] Added request cmpl-654abef74da94627b8e9a0c1c8571a61-0.
INFO 03-02 00:13:21 [logger.py:42] Received request cmpl-0a470c38a6d94df9ab226b6ca1417424-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:13:21 [async_llm.py:261] Added request cmpl-0a470c38a6d94df9ab226b6ca1417424-0.
INFO 03-02 00:13:23 [logger.py:42] Received request cmpl-659c03998b2347a5b8cdc21484b32674-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:13:23 [async_llm.py:261] Added request cmpl-659c03998b2347a5b8cdc21484b32674-0.
INFO 03-02 00:13:24 [logger.py:42] Received request cmpl-3becece4ae2e459bb7a68f96cedfbcf5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:13:24 [async_llm.py:261] Added request cmpl-3becece4ae2e459bb7a68f96cedfbcf5-0.
INFO 03-02 00:13:25 [logger.py:42] Received request cmpl-d5638bd1ab0d46928658098dd56b4c3f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:13:25 [async_llm.py:261] Added request cmpl-d5638bd1ab0d46928658098dd56b4c3f-0.
INFO 03-02 00:13:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:13:26 [logger.py:42] Received request cmpl-f893b56a3e0346fb8f98697f6849f7d8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:13:26 [async_llm.py:261] Added request cmpl-f893b56a3e0346fb8f98697f6849f7d8-0.
INFO 03-02 00:13:27 [logger.py:42] Received request cmpl-7712929efda84446b6d238c6abf91917-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:13:27 [async_llm.py:261] Added request cmpl-7712929efda84446b6d238c6abf91917-0.
INFO 03-02 00:13:28 [logger.py:42] Received request cmpl-e6813f94f11c4c608ab67e92274a3f27-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:13:28 [async_llm.py:261] Added request cmpl-e6813f94f11c4c608ab67e92274a3f27-0.
INFO 03-02 00:13:29 [logger.py:42] Received request cmpl-ad8bc73abb0b4b61b95ff693cf03e368-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:13:29 [async_llm.py:261] Added request cmpl-ad8bc73abb0b4b61b95ff693cf03e368-0.
INFO 03-02 00:13:30 [logger.py:42] Received request cmpl-84d350e13f9e460c86d95a36366d818a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:13:30 [async_llm.py:261] Added request cmpl-84d350e13f9e460c86d95a36366d818a-0.
INFO 03-02 00:13:31 [logger.py:42] Received request cmpl-13d0c1a097ff43f08ec231ac630d3af2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:13:31 [async_llm.py:261] Added request cmpl-13d0c1a097ff43f08ec231ac630d3af2-0.
INFO 03-02 00:13:32 [logger.py:42] Received request cmpl-c1673e15fd6644cea5ef50b8a340d746-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:13:32 [async_llm.py:261] Added request cmpl-c1673e15fd6644cea5ef50b8a340d746-0.
INFO 03-02 00:13:33 [logger.py:42] Received request cmpl-3686b071574f436a83232718730cb6a9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:13:33 [async_llm.py:261] Added request cmpl-3686b071574f436a83232718730cb6a9-0.
INFO 03-02 00:13:34 [logger.py:42] Received request cmpl-aea616b603574dcaa6b99abb617d80b4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:13:34 [async_llm.py:261] Added request cmpl-aea616b603574dcaa6b99abb617d80b4-0.
INFO 03-02 00:13:36 [logger.py:42] Received request cmpl-2279789bbe2446c0b0f2ee58cd7ac06c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:13:36 [async_llm.py:261] Added request cmpl-2279789bbe2446c0b0f2ee58cd7ac06c-0.
INFO 03-02 00:13:36 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:13:37 [logger.py:42] Received request cmpl-0d0dc2df7ba74479968f29483a45a9e4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:13:37 [async_llm.py:261] Added request cmpl-0d0dc2df7ba74479968f29483a45a9e4-0.
INFO 03-02 00:13:38 [logger.py:42] Received request cmpl-421c18e381fa470283466e7db3af29df-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:13:38 [async_llm.py:261] Added request cmpl-421c18e381fa470283466e7db3af29df-0.
INFO 03-02 00:13:39 [logger.py:42] Received request cmpl-1abc0f1a5c7146a8aa9c39c43937ea58-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:13:39 [async_llm.py:261] Added request cmpl-1abc0f1a5c7146a8aa9c39c43937ea58-0.
INFO 03-02 00:13:40 [logger.py:42] Received request cmpl-07f62e1fb06d4c80ae365b7ca866d754-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:13:40 [async_llm.py:261] Added request cmpl-07f62e1fb06d4c80ae365b7ca866d754-0.
INFO 03-02 00:13:41 [logger.py:42] Received request cmpl-b171ab97defe43e59df47cdc60431463-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:13:41 [async_llm.py:261] Added request cmpl-b171ab97defe43e59df47cdc60431463-0.
INFO 03-02 00:13:42 [logger.py:42] Received request cmpl-06c7461a87ea4d6e9c720a5fa3f32e22-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:13:42 [async_llm.py:261] Added request cmpl-06c7461a87ea4d6e9c720a5fa3f32e22-0.
INFO 03-02 00:13:43 [logger.py:42] Received request cmpl-5fcfaf60591647618e655aa4083e814b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:13:43 [async_llm.py:261] Added request cmpl-5fcfaf60591647618e655aa4083e814b-0.
INFO 03-02 00:13:44 [logger.py:42] Received request cmpl-413746f37dd846a280eed874868fd707-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:13:44 [async_llm.py:261] Added request cmpl-413746f37dd846a280eed874868fd707-0.
INFO 03-02 00:13:45 [logger.py:42] Received request cmpl-595bcbb2f0be4c1bb99515688d3ec250-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:13:45 [async_llm.py:261] Added request cmpl-595bcbb2f0be4c1bb99515688d3ec250-0.
INFO 03-02 00:13:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:13:46 [logger.py:42] Received request cmpl-e681bbea3bbf49fbabb4b2599e77e46f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:13:46 [async_llm.py:261] Added request cmpl-e681bbea3bbf49fbabb4b2599e77e46f-0.
INFO 03-02 00:13:47 [logger.py:42] Received request cmpl-9094b2f220a74c62a1750fb22a45530d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:13:47 [async_llm.py:261] Added request cmpl-9094b2f220a74c62a1750fb22a45530d-0.
INFO 03-02 00:13:49 [logger.py:42] Received request cmpl-42e0d74127fe46b49c21e6bf1141fbe5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:13:49 [async_llm.py:261] Added request cmpl-42e0d74127fe46b49c21e6bf1141fbe5-0.
INFO 03-02 00:13:50 [logger.py:42] Received request cmpl-35358e64c80448498b893dcae6bdcb73-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:13:50 [async_llm.py:261] Added request cmpl-35358e64c80448498b893dcae6bdcb73-0.
INFO 03-02 00:13:51 [logger.py:42] Received request cmpl-0ed1e251442c4462acc8d3455d2e6f18-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:13:51 [async_llm.py:261] Added request cmpl-0ed1e251442c4462acc8d3455d2e6f18-0.
INFO 03-02 00:13:52 [logger.py:42] Received request cmpl-7e1fed33cc6d47d9a3890ab865f7b65a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:13:52 [async_llm.py:261] Added request cmpl-7e1fed33cc6d47d9a3890ab865f7b65a-0.
INFO 03-02 00:13:53 [logger.py:42] Received request cmpl-117e93274c9a43dfbfb076af54696c76-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:13:53 [async_llm.py:261] Added request cmpl-117e93274c9a43dfbfb076af54696c76-0.
INFO 03-02 00:13:54 [logger.py:42] Received request cmpl-956c76c4fb05486b8c465264fd9edb92-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:13:54 [async_llm.py:261] Added request cmpl-956c76c4fb05486b8c465264fd9edb92-0.
INFO 03-02 00:13:55 [logger.py:42] Received request cmpl-4984cbd925174b559f529ba2aee4911a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:13:55 [async_llm.py:261] Added request cmpl-4984cbd925174b559f529ba2aee4911a-0.
INFO 03-02 00:13:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:13:56 [logger.py:42] Received request cmpl-2af107cce25c42b88ecf64a51d04a32b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:13:56 [async_llm.py:261] Added request cmpl-2af107cce25c42b88ecf64a51d04a32b-0.
INFO 03-02 00:13:57 [logger.py:42] Received request cmpl-676173fdee914e6fb6b54b7031b84c01-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:13:57 [async_llm.py:261] Added request cmpl-676173fdee914e6fb6b54b7031b84c01-0.
INFO 03-02 00:13:58 [logger.py:42] Received request cmpl-5c877cc6efaa4e408c400fe78fda465e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:13:58 [async_llm.py:261] Added request cmpl-5c877cc6efaa4e408c400fe78fda465e-0.
INFO 03-02 00:13:59 [logger.py:42] Received request cmpl-4ebec10528f34c4281a2f6ad4cf075c1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:13:59 [async_llm.py:261] Added request cmpl-4ebec10528f34c4281a2f6ad4cf075c1-0.
INFO 03-02 00:14:00 [logger.py:42] Received request cmpl-a17235639c154c3fba1517f721c8c6af-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:00 [async_llm.py:261] Added request cmpl-a17235639c154c3fba1517f721c8c6af-0.
INFO 03-02 00:14:02 [logger.py:42] Received request cmpl-20a9fe59a67c4d58b9248860f8bc3348-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:02 [async_llm.py:261] Added request cmpl-20a9fe59a67c4d58b9248860f8bc3348-0.
INFO 03-02 00:14:03 [logger.py:42] Received request cmpl-2c9b9c816ae443e5afa2acfceffec8e5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:03 [async_llm.py:261] Added request cmpl-2c9b9c816ae443e5afa2acfceffec8e5-0.
INFO 03-02 00:14:04 [logger.py:42] Received request cmpl-c51ee38a27074895a65dbd69b38905e6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:04 [async_llm.py:261] Added request cmpl-c51ee38a27074895a65dbd69b38905e6-0.
INFO 03-02 00:14:05 [logger.py:42] Received request cmpl-27d51af3c1b04adcba607efd023ceba6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:05 [async_llm.py:261] Added request cmpl-27d51af3c1b04adcba607efd023ceba6-0.
INFO 03-02 00:14:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:14:06 [logger.py:42] Received request cmpl-f5a996bec0a44334b2bc07cb91a64504-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:06 [async_llm.py:261] Added request cmpl-f5a996bec0a44334b2bc07cb91a64504-0.
INFO 03-02 00:14:07 [logger.py:42] Received request cmpl-09bef8e66ef84b8293b27b249d276fc7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:07 [async_llm.py:261] Added request cmpl-09bef8e66ef84b8293b27b249d276fc7-0.
INFO 03-02 00:14:08 [logger.py:42] Received request cmpl-168a72cd24ed4eecbba278d232974696-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:08 [async_llm.py:261] Added request cmpl-168a72cd24ed4eecbba278d232974696-0.
INFO 03-02 00:14:09 [logger.py:42] Received request cmpl-af9b4f27588844a3817d2b3636c16909-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:09 [async_llm.py:261] Added request cmpl-af9b4f27588844a3817d2b3636c16909-0.
INFO 03-02 00:14:10 [logger.py:42] Received request cmpl-aa87c221156f47b9ad3647628be4d75e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:10 [async_llm.py:261] Added request cmpl-aa87c221156f47b9ad3647628be4d75e-0.
INFO 03-02 00:14:11 [logger.py:42] Received request cmpl-ad902428333a44069bb91319a35f3da0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:11 [async_llm.py:261] Added request cmpl-ad902428333a44069bb91319a35f3da0-0.
INFO 03-02 00:14:12 [logger.py:42] Received request cmpl-0c01d8395aed443c810e8cf51ae224b9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:12 [async_llm.py:261] Added request cmpl-0c01d8395aed443c810e8cf51ae224b9-0.
INFO 03-02 00:14:13 [logger.py:42] Received request cmpl-e0addcbd10734116872175c7c90976d5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:13 [async_llm.py:261] Added request cmpl-e0addcbd10734116872175c7c90976d5-0.
INFO 03-02 00:14:15 [logger.py:42] Received request cmpl-a34746fddf8f47e9b26b5dc7c2bbf41a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:15 [async_llm.py:261] Added request cmpl-a34746fddf8f47e9b26b5dc7c2bbf41a-0.
INFO 03-02 00:14:16 [logger.py:42] Received request cmpl-2051840112f44f1eb614c0d4492d057e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:16 [async_llm.py:261] Added request cmpl-2051840112f44f1eb614c0d4492d057e-0.
INFO 03-02 00:14:16 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:14:17 [logger.py:42] Received request cmpl-02dcf4cd5b1b4ee1a7d44cff426e8167-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:17 [async_llm.py:261] Added request cmpl-02dcf4cd5b1b4ee1a7d44cff426e8167-0.
INFO 03-02 00:14:18 [logger.py:42] Received request cmpl-41768ef994c442408bbec815af71df24-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:18 [async_llm.py:261] Added request cmpl-41768ef994c442408bbec815af71df24-0.
INFO 03-02 00:14:19 [logger.py:42] Received request cmpl-8ca65d9d4efc4e4c95eee04525ce9a04-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:19 [async_llm.py:261] Added request cmpl-8ca65d9d4efc4e4c95eee04525ce9a04-0.
INFO 03-02 00:14:20 [logger.py:42] Received request cmpl-71e430f256504a0bbb93701221f5bd58-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:20 [async_llm.py:261] Added request cmpl-71e430f256504a0bbb93701221f5bd58-0.
INFO 03-02 00:14:21 [logger.py:42] Received request cmpl-2182b723180244a0a823271b53afa3d8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:21 [async_llm.py:261] Added request cmpl-2182b723180244a0a823271b53afa3d8-0.
INFO 03-02 00:14:22 [logger.py:42] Received request cmpl-ee89bb96ceb5457e9b61ffe80c097a79-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:22 [async_llm.py:261] Added request cmpl-ee89bb96ceb5457e9b61ffe80c097a79-0.
INFO 03-02 00:14:23 [logger.py:42] Received request cmpl-60405ba0624543e4a440cfbef678cb24-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:23 [async_llm.py:261] Added request cmpl-60405ba0624543e4a440cfbef678cb24-0.
INFO 03-02 00:14:24 [logger.py:42] Received request cmpl-7d1e08cfe43e485789396792b64544b1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:24 [async_llm.py:261] Added request cmpl-7d1e08cfe43e485789396792b64544b1-0.
INFO 03-02 00:14:25 [logger.py:42] Received request cmpl-cba88dbc065844d28da48c337cfc597c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:25 [async_llm.py:261] Added request cmpl-cba88dbc065844d28da48c337cfc597c-0.
INFO 03-02 00:14:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:14:26 [logger.py:42] Received request cmpl-bd7656d1e8f74cba89e3e30a67ffe365-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:26 [async_llm.py:261] Added request cmpl-bd7656d1e8f74cba89e3e30a67ffe365-0.
INFO 03-02 00:14:28 [logger.py:42] Received request cmpl-4007aae250554217a0372f5ca7caf43d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:28 [async_llm.py:261] Added request cmpl-4007aae250554217a0372f5ca7caf43d-0.
INFO 03-02 00:14:29 [logger.py:42] Received request cmpl-53e6fd8e34c04e10897e1757d67fb592-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:29 [async_llm.py:261] Added request cmpl-53e6fd8e34c04e10897e1757d67fb592-0.
INFO 03-02 00:14:30 [logger.py:42] Received request cmpl-dcebbbf8c66649daa95de1b54218a95d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:30 [async_llm.py:261] Added request cmpl-dcebbbf8c66649daa95de1b54218a95d-0.
INFO 03-02 00:14:31 [logger.py:42] Received request cmpl-3137a8e6de9544d5829ae7ca5254b601-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:31 [async_llm.py:261] Added request cmpl-3137a8e6de9544d5829ae7ca5254b601-0.
INFO 03-02 00:14:32 [logger.py:42] Received request cmpl-0f5571c59b564f669506bbe965246ce0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:32 [async_llm.py:261] Added request cmpl-0f5571c59b564f669506bbe965246ce0-0.
INFO 03-02 00:14:33 [logger.py:42] Received request cmpl-ed09c35b079443bdaf27766362de7846-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:33 [async_llm.py:261] Added request cmpl-ed09c35b079443bdaf27766362de7846-0.
INFO 03-02 00:14:34 [logger.py:42] Received request cmpl-92a49ad2fa884627bfc433eed191b1d4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:34 [async_llm.py:261] Added request cmpl-92a49ad2fa884627bfc433eed191b1d4-0.
INFO 03-02 00:14:35 [logger.py:42] Received request cmpl-f4b12f8c5a1f46b9992155f936ee0d13-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:35 [async_llm.py:261] Added request cmpl-f4b12f8c5a1f46b9992155f936ee0d13-0.
INFO 03-02 00:14:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:14:36 [logger.py:42] Received request cmpl-9da8c409768141a496ccca24dc6f6f36-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:36 [async_llm.py:261] Added request cmpl-9da8c409768141a496ccca24dc6f6f36-0.
INFO 03-02 00:14:37 [logger.py:42] Received request cmpl-d6688f413ba34fa5b24539d704d289b0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:37 [async_llm.py:261] Added request cmpl-d6688f413ba34fa5b24539d704d289b0-0.
INFO 03-02 00:14:38 [logger.py:42] Received request cmpl-31a87001f1264d35aa3746469a76c28d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:38 [async_llm.py:261] Added request cmpl-31a87001f1264d35aa3746469a76c28d-0.
INFO 03-02 00:14:39 [logger.py:42] Received request cmpl-db56b31b99574b2b9ecf1d4564aad7d0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:39 [async_llm.py:261] Added request cmpl-db56b31b99574b2b9ecf1d4564aad7d0-0.
INFO 03-02 00:14:41 [logger.py:42] Received request cmpl-36e5a15ddc514de798255ea33ab560b7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:41 [async_llm.py:261] Added request cmpl-36e5a15ddc514de798255ea33ab560b7-0.
INFO 03-02 00:14:42 [logger.py:42] Received request cmpl-0bcae169ae324282aad3875f44e2aee0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:42 [async_llm.py:261] Added request cmpl-0bcae169ae324282aad3875f44e2aee0-0.
INFO 03-02 00:14:43 [logger.py:42] Received request cmpl-11e3e1ffc59d4db09b0395ec7b5bf077-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:43 [async_llm.py:261] Added request cmpl-11e3e1ffc59d4db09b0395ec7b5bf077-0.
INFO 03-02 00:14:44 [logger.py:42] Received request cmpl-2afae36d071a4d02a0692ef52cc3b55c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:44 [async_llm.py:261] Added request cmpl-2afae36d071a4d02a0692ef52cc3b55c-0.
INFO 03-02 00:14:45 [logger.py:42] Received request cmpl-98cf6dd220d14b16b6020ec242794aba-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:45 [async_llm.py:261] Added request cmpl-98cf6dd220d14b16b6020ec242794aba-0.
INFO 03-02 00:14:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:14:46 [logger.py:42] Received request cmpl-27d51432ff5e4983ad5bb2860b152cb8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:46 [async_llm.py:261] Added request cmpl-27d51432ff5e4983ad5bb2860b152cb8-0.
INFO 03-02 00:14:47 [logger.py:42] Received request cmpl-6e89908cef194b80bc1caccdd0b6b7d1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:47 [async_llm.py:261] Added request cmpl-6e89908cef194b80bc1caccdd0b6b7d1-0.
INFO 03-02 00:14:48 [logger.py:42] Received request cmpl-53d3036e3fb34d6d8b896d4beb8dc60c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:48 [async_llm.py:261] Added request cmpl-53d3036e3fb34d6d8b896d4beb8dc60c-0.
INFO 03-02 00:14:49 [logger.py:42] Received request cmpl-6578806667404fa58f5e11782b219141-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:49 [async_llm.py:261] Added request cmpl-6578806667404fa58f5e11782b219141-0.
INFO 03-02 00:14:50 [logger.py:42] Received request cmpl-275fc97b0899425ca7d99f6b14f024f3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:50 [async_llm.py:261] Added request cmpl-275fc97b0899425ca7d99f6b14f024f3-0.
INFO 03-02 00:14:51 [logger.py:42] Received request cmpl-6d3d0aff4eac44709baedb9b660115af-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:51 [async_llm.py:261] Added request cmpl-6d3d0aff4eac44709baedb9b660115af-0.
INFO 03-02 00:14:52 [logger.py:42] Received request cmpl-5e4343be7d4342dc9e6f30a008dacbd0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:53 [async_llm.py:261] Added request cmpl-5e4343be7d4342dc9e6f30a008dacbd0-0.
INFO 03-02 00:14:54 [logger.py:42] Received request cmpl-2c7df74bf6ae4368890bc9ff3333c562-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:54 [async_llm.py:261] Added request cmpl-2c7df74bf6ae4368890bc9ff3333c562-0.
INFO 03-02 00:14:55 [logger.py:42] Received request cmpl-f8ca1bb1388a4f2dbe0cc43a132056b7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:55 [async_llm.py:261] Added request cmpl-f8ca1bb1388a4f2dbe0cc43a132056b7-0.
INFO 03-02 00:14:56 [logger.py:42] Received request cmpl-38a05320d1bb412e8c4e1b983cb1f1f9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:56 [async_llm.py:261] Added request cmpl-38a05320d1bb412e8c4e1b983cb1f1f9-0.
INFO 03-02 00:14:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:14:57 [logger.py:42] Received request cmpl-bf0a8afb09754e1b9da3a5033f206151-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:57 [async_llm.py:261] Added request cmpl-bf0a8afb09754e1b9da3a5033f206151-0.
INFO 03-02 00:14:58 [logger.py:42] Received request cmpl-8376089a7bd14166bb42e050f99fe775-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:58 [async_llm.py:261] Added request cmpl-8376089a7bd14166bb42e050f99fe775-0.
INFO 03-02 00:14:59 [logger.py:42] Received request cmpl-f51beb2f2cba434fa6bf45ca4a217d86-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:59 [async_llm.py:261] Added request cmpl-f51beb2f2cba434fa6bf45ca4a217d86-0.
INFO 03-02 00:15:00 [logger.py:42] Received request cmpl-1deb5d8b0267430b86d20780a06433b7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:00 [async_llm.py:261] Added request cmpl-1deb5d8b0267430b86d20780a06433b7-0.
INFO 03-02 00:15:01 [logger.py:42] Received request cmpl-570bb718e3b7412e97db51997f6e16e6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:01 [async_llm.py:261] Added request cmpl-570bb718e3b7412e97db51997f6e16e6-0.
INFO 03-02 00:15:02 [logger.py:42] Received request cmpl-7071bf44c9304e83a5e993003e44e885-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:02 [async_llm.py:261] Added request cmpl-7071bf44c9304e83a5e993003e44e885-0.
INFO 03-02 00:15:03 [logger.py:42] Received request cmpl-21d0c64a98de496a8ce7dea44bb393d8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:03 [async_llm.py:261] Added request cmpl-21d0c64a98de496a8ce7dea44bb393d8-0.
INFO 03-02 00:15:04 [logger.py:42] Received request cmpl-c141478b233d4a0db6666fc76a849d27-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:04 [async_llm.py:261] Added request cmpl-c141478b233d4a0db6666fc76a849d27-0.
INFO 03-02 00:15:06 [logger.py:42] Received request cmpl-c4c8f7b5f3c8406cb4ce9bafadf46b37-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:06 [async_llm.py:261] Added request cmpl-c4c8f7b5f3c8406cb4ce9bafadf46b37-0.
INFO 03-02 00:15:06 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:15:07 [logger.py:42] Received request cmpl-8aea6c1061564c4ab29b3bbfd855079e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:07 [async_llm.py:261] Added request cmpl-8aea6c1061564c4ab29b3bbfd855079e-0.
INFO 03-02 00:15:08 [logger.py:42] Received request cmpl-b2327290b15543d2b7113bbcd0fa7197-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:08 [async_llm.py:261] Added request cmpl-b2327290b15543d2b7113bbcd0fa7197-0.
INFO 03-02 00:15:09 [logger.py:42] Received request cmpl-7cfd04c64aad46ad88c0ada02fd018a7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:09 [async_llm.py:261] Added request cmpl-7cfd04c64aad46ad88c0ada02fd018a7-0.
INFO 03-02 00:15:10 [logger.py:42] Received request cmpl-3d7234503ec04fd2a6a1bf347ffeec5a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:10 [async_llm.py:261] Added request cmpl-3d7234503ec04fd2a6a1bf347ffeec5a-0.
INFO 03-02 00:15:11 [logger.py:42] Received request cmpl-c3bc893b4d244c228c54b06c9ddb4316-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:11 [async_llm.py:261] Added request cmpl-c3bc893b4d244c228c54b06c9ddb4316-0.
INFO 03-02 00:15:12 [logger.py:42] Received request cmpl-84033e20fe2d45b68fc126ed82f6fd94-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:12 [async_llm.py:261] Added request cmpl-84033e20fe2d45b68fc126ed82f6fd94-0.
INFO 03-02 00:15:13 [logger.py:42] Received request cmpl-8d23141790cf406395093e7f1a96e6d2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:13 [async_llm.py:261] Added request cmpl-8d23141790cf406395093e7f1a96e6d2-0.
INFO 03-02 00:15:14 [logger.py:42] Received request cmpl-6f49684da1554dc0b23d65ce4f4adb07-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:14 [async_llm.py:261] Added request cmpl-6f49684da1554dc0b23d65ce4f4adb07-0.
INFO 03-02 00:15:15 [logger.py:42] Received request cmpl-33201c4403d7454ea357c9891e56d150-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:15 [async_llm.py:261] Added request cmpl-33201c4403d7454ea357c9891e56d150-0.
INFO 03-02 00:15:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:15:16 [logger.py:42] Received request cmpl-ac839cc752f5406e8f63eecc5aeab171-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:16 [async_llm.py:261] Added request cmpl-ac839cc752f5406e8f63eecc5aeab171-0.
INFO 03-02 00:15:17 [logger.py:42] Received request cmpl-dc4c15e1f95b4a858640675247672fc8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:17 [async_llm.py:261] Added request cmpl-dc4c15e1f95b4a858640675247672fc8-0.
INFO 03-02 00:15:19 [logger.py:42] Received request cmpl-7fbaef78eeab4db98f796aff02b0266e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:19 [async_llm.py:261] Added request cmpl-7fbaef78eeab4db98f796aff02b0266e-0.
INFO 03-02 00:15:20 [logger.py:42] Received request cmpl-1ba10b62e30b4ba7af279c81054fd152-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:20 [async_llm.py:261] Added request cmpl-1ba10b62e30b4ba7af279c81054fd152-0.
INFO 03-02 00:15:21 [logger.py:42] Received request cmpl-2d3e3b1d342842419e682345e1a9148d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:21 [async_llm.py:261] Added request cmpl-2d3e3b1d342842419e682345e1a9148d-0.
INFO 03-02 00:15:22 [logger.py:42] Received request cmpl-3f0a53fe7e9e44659abb16b37379d78f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:22 [async_llm.py:261] Added request cmpl-3f0a53fe7e9e44659abb16b37379d78f-0.
INFO 03-02 00:15:23 [logger.py:42] Received request cmpl-38059712b48646f99b66e82341033a61-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:23 [async_llm.py:261] Added request cmpl-38059712b48646f99b66e82341033a61-0.
INFO 03-02 00:15:24 [logger.py:42] Received request cmpl-9e1d940bbdb846f3b728ea00a677f445-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:24 [async_llm.py:261] Added request cmpl-9e1d940bbdb846f3b728ea00a677f445-0.
INFO 03-02 00:15:25 [logger.py:42] Received request cmpl-25f84325dc8c4d81b2463f9eac15bc1f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:25 [async_llm.py:261] Added request cmpl-25f84325dc8c4d81b2463f9eac15bc1f-0.
INFO 03-02 00:15:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:15:26 [logger.py:42] Received request cmpl-6fe9b853971548899adae2dcc21d16ca-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:26 [async_llm.py:261] Added request cmpl-6fe9b853971548899adae2dcc21d16ca-0.
INFO 03-02 00:15:27 [logger.py:42] Received request cmpl-e191d68fc0b34d198ed5ade88aa55dc2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:27 [async_llm.py:261] Added request cmpl-e191d68fc0b34d198ed5ade88aa55dc2-0.
INFO 03-02 00:15:28 [logger.py:42] Received request cmpl-7489a4b5ac8a427385831e6133822e29-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:28 [async_llm.py:261] Added request cmpl-7489a4b5ac8a427385831e6133822e29-0.
INFO 03-02 00:15:29 [logger.py:42] Received request cmpl-5441c55375024c9080d9b5c45cb46d26-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:29 [async_llm.py:261] Added request cmpl-5441c55375024c9080d9b5c45cb46d26-0.
INFO 03-02 00:15:30 [logger.py:42] Received request cmpl-1a9af208ee064690b7a5bae8a8c3f2b1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:30 [async_llm.py:261] Added request cmpl-1a9af208ee064690b7a5bae8a8c3f2b1-0.
INFO 03-02 00:15:32 [logger.py:42] Received request cmpl-d449389f6e944fa58810833b1f572a1f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:32 [async_llm.py:261] Added request cmpl-d449389f6e944fa58810833b1f572a1f-0.
INFO 03-02 00:15:33 [logger.py:42] Received request cmpl-bc97f61f72a64c29abbbabae593761ed-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:33 [async_llm.py:261] Added request cmpl-bc97f61f72a64c29abbbabae593761ed-0.
INFO 03-02 00:15:34 [logger.py:42] Received request cmpl-9c3712a2c29e4c8486a523594eaf14f5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:34 [async_llm.py:261] Added request cmpl-9c3712a2c29e4c8486a523594eaf14f5-0.
INFO 03-02 00:15:35 [logger.py:42] Received request cmpl-5e04549059f549d7bd9116db46bcfc94-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:35 [async_llm.py:261] Added request cmpl-5e04549059f549d7bd9116db46bcfc94-0.
INFO 03-02 00:15:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:15:36 [logger.py:42] Received request cmpl-518e46b127084807b796b9cf04a8af82-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:36 [async_llm.py:261] Added request cmpl-518e46b127084807b796b9cf04a8af82-0.
[… 9 further request/response cycles elided (00:15:37–00:15:46); each repeats the same 7-token prompt with max_tokens=5 and returns 200 OK …]
INFO 03-02 00:15:46 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[… 9 further request/response cycles elided (00:15:47–00:15:55); identical parameters, all 200 OK …]
INFO 03-02 00:15:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[… 9 further request/response cycles elided (00:15:56–00:16:05); identical parameters, all 200 OK …]
INFO 03-02 00:16:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[… 9 further request/response cycles elided (00:16:06–00:16:15); identical parameters, all 200 OK …]
INFO 03-02 00:16:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
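The periodic metrics lines are consistent with the traffic pattern visible in the log: each request carries a 7-token prompt (`prompt_token_ids` has 7 entries) and generates up to `max_tokens=5`, at roughly 9 requests per 10-second metrics window. A quick sanity check, assuming every request runs to the full 5 generated tokens:

```python
# Reconcile the engine's reported averages with the logged request pattern.
prompt_tokens = 7        # len(prompt_token_ids) in each logged request
gen_tokens = 5           # max_tokens=5, assumed fully generated
reqs_per_window = 9      # requests observed per 10 s metrics interval
window_s = 10.0

prompt_tps = prompt_tokens * reqs_per_window / window_s
gen_tps = gen_tokens * reqs_per_window / window_s
print(prompt_tps, gen_tps)  # 6.3 4.5 — matches the "Avg ... throughput" lines
```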
[… 4 further request/response cycles elided (00:16:16–00:16:19); identical parameters, all 200 OK …]
INFO 03-02 00:16:20 [logger.py:42] Received request cmpl-f09ae7b5c0664c93936e00eca51c1b5e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:16:20 [async_llm.py:261] Added request cmpl-f09ae7b5c0664c93936e00eca51c1b5e-0.
INFO 03-02 00:16:21 [logger.py:42] Received request cmpl-ca9cb9e6be674c858a5a252d1c9958d6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:16:21 [async_llm.py:261] Added request cmpl-ca9cb9e6be674c858a5a252d1c9958d6-0.
INFO 03-02 00:16:22 [logger.py:42] Received request cmpl-7fe05802d343437b86e1da1a8c6010f3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:16:22 [async_llm.py:261] Added request cmpl-7fe05802d343437b86e1da1a8c6010f3-0.
INFO 03-02 00:16:24 [logger.py:42] Received request cmpl-6d03802f936b4cda8b4a8e6edd4be0b3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:16:24 [async_llm.py:261] Added request cmpl-6d03802f936b4cda8b4a8e6edd4be0b3-0.
INFO 03-02 00:16:25 [logger.py:42] Received request cmpl-a6d8217e9ce846278fbbe3b066520364-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:16:25 [async_llm.py:261] Added request cmpl-a6d8217e9ce846278fbbe3b066520364-0.
INFO 03-02 00:16:26 [logger.py:42] Received request cmpl-c50ba8905cd441d7a2354270a1ba6f1d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:16:26 [async_llm.py:261] Added request cmpl-c50ba8905cd441d7a2354270a1ba6f1d-0.
INFO 03-02 00:16:26 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.9 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 1.3%, Prefix cache hit rate: 0.0%
[… 9 further identical request triplets (00:16:27–00:16:35) elided …]
INFO 03-02 00:16:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.6 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[… 9 further identical request triplets (00:16:37–00:16:45) elided …]
INFO 03-02 00:16:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[… 9 further identical request triplets (00:16:46–00:16:55) elided …]
INFO 03-02 00:16:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
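The periodic `loggers.py` lines carry the only aggregate signal in this log (throughput, queue depth, KV-cache pressure). A sketch of pulling those fields out with a regex — the line format matches what appears in this log, though the field set may differ across vLLM versions:

```python
import re

# Named-group pattern for the engine-metrics line emitted every ~10 s.
STATS = re.compile(
    r"Avg prompt throughput: (?P<prompt_tps>[\d.]+) tokens/s, "
    r"Avg generation throughput: (?P<gen_tps>[\d.]+) tokens/s, "
    r"Running: (?P<running>\d+) reqs, Waiting: (?P<waiting>\d+) reqs, "
    r"GPU KV cache usage: (?P<kv_pct>[\d.]+)%"
)

line = ("INFO 03-02 00:16:56 [loggers.py:116] Engine 000: "
        "Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: "
        "4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, "
        "GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%")

m = STATS.search(line)
print(m.group("gen_tps"))  # → 4.5
```

`Waiting: 0 reqs` throughout this window indicates the pod is keeping up with the roughly one-request-per-second load without queueing.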
[… 3 further identical request triplets (00:16:56–00:16:58) elided …]
INFO 03-02 00:16:59 [logger.py:42] Received request cmpl-2fee463b571947b39ac20193b46ba7cd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:16:59 [async_llm.py:261] Added request cmpl-2fee463b571947b39ac20193b46ba7cd-0.
INFO 03-02 00:17:00 [logger.py:42] Received request cmpl-cc37a00b7a324c82b4b18c642b9467b6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:17:00 [async_llm.py:261] Added request cmpl-cc37a00b7a324c82b4b18c642b9467b6-0.
INFO 03-02 00:17:01 [logger.py:42] Received request cmpl-cb8156a2c44041228a0898ff3a1a11f5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:17:01 [async_llm.py:261] Added request cmpl-cb8156a2c44041228a0898ff3a1a11f5-0.
INFO 03-02 00:17:03 [logger.py:42] Received request cmpl-5043f256a66e4dc2bc19bb0943973204-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:17:03 [async_llm.py:261] Added request cmpl-5043f256a66e4dc2bc19bb0943973204-0.
INFO 03-02 00:17:04 [logger.py:42] Received request cmpl-b3ab8c766a9f4019bdf0f4955cd9e7d6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:17:04 [async_llm.py:261] Added request cmpl-b3ab8c766a9f4019bdf0f4955cd9e7d6-0.
INFO 03-02 00:17:05 [logger.py:42] Received request cmpl-3d5025d35e474a1ba14f5db8027152a7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:17:05 [async_llm.py:261] Added request cmpl-3d5025d35e474a1ba14f5db8027152a7-0.
INFO 03-02 00:17:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:17:06 [logger.py:42] Received request cmpl-01be0b3c39e8424c8fbb23a8ee5a9494-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:17:06 [async_llm.py:261] Added request cmpl-01be0b3c39e8424c8fbb23a8ee5a9494-0.
INFO 03-02 00:17:07 [logger.py:42] Received request cmpl-1c8ced1a9ae849b18825158c365a8f94-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:17:07 [async_llm.py:261] Added request cmpl-1c8ced1a9ae849b18825158c365a8f94-0.
INFO 03-02 00:17:08 [logger.py:42] Received request cmpl-99e101c8350f4bab8fb04981e6ce17b8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:17:08 [async_llm.py:261] Added request cmpl-99e101c8350f4bab8fb04981e6ce17b8-0.
INFO 03-02 00:17:09 [logger.py:42] Received request cmpl-26b90183a7b74076a13749920a05ed2f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:17:09 [async_llm.py:261] Added request cmpl-26b90183a7b74076a13749920a05ed2f-0.
INFO 03-02 00:17:10 [logger.py:42] Received request cmpl-49fa8a3b1734494a92f04129c418bfed-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:17:10 [async_llm.py:261] Added request cmpl-49fa8a3b1734494a92f04129c418bfed-0.
INFO 03-02 00:17:11 [logger.py:42] Received request cmpl-10c332fc87104ccba573825c54ddfb39-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:17:11 [async_llm.py:261] Added request cmpl-10c332fc87104ccba573825c54ddfb39-0.
INFO 03-02 00:17:12 [logger.py:42] Received request cmpl-4411c0dd3641435fb7ea7065eade7613-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:17:12 [async_llm.py:261] Added request cmpl-4411c0dd3641435fb7ea7065eade7613-0.
INFO 03-02 00:17:13 [logger.py:42] Received request cmpl-2df4082fc5ab4b688cf2d10edf14d90c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:17:13 [async_llm.py:261] Added request cmpl-2df4082fc5ab4b688cf2d10edf14d90c-0.
INFO 03-02 00:17:14 [logger.py:42] Received request cmpl-ca3e6cf080eb4fa48ee36a82b2a29aa0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:17:14 [async_llm.py:261] Added request cmpl-ca3e6cf080eb4fa48ee36a82b2a29aa0-0.
INFO 03-02 00:17:16 [logger.py:42] Received request cmpl-e4b2f1ef08a84a4e9e9f2786d6cd687d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:17:16 [async_llm.py:261] Added request cmpl-e4b2f1ef08a84a4e9e9f2786d6cd687d-0.
INFO 03-02 00:17:16 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:17:17 [logger.py:42] Received request cmpl-8c7f1ae2c92841cda46862784d7a1bd2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:17:17 [async_llm.py:261] Added request cmpl-8c7f1ae2c92841cda46862784d7a1bd2-0.
INFO 03-02 00:17:18 [logger.py:42] Received request cmpl-8375184fce794385aa96266af53420e0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:17:18 [async_llm.py:261] Added request cmpl-8375184fce794385aa96266af53420e0-0.
INFO 03-02 00:17:19 [logger.py:42] Received request cmpl-36c6780e0579448ea73d4c6560a9abc1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:17:19 [async_llm.py:261] Added request cmpl-36c6780e0579448ea73d4c6560a9abc1-0.
INFO 03-02 00:17:20 [logger.py:42] Received request cmpl-960fa10995ed4c839d94f82b2a034395-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:17:20 [async_llm.py:261] Added request cmpl-960fa10995ed4c839d94f82b2a034395-0.
INFO 03-02 00:17:21 [logger.py:42] Received request cmpl-c6ef068cfa16464f9eed7d5b80f7a181-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:17:21 [async_llm.py:261] Added request cmpl-c6ef068cfa16464f9eed7d5b80f7a181-0.
INFO 03-02 00:17:22 [logger.py:42] Received request cmpl-b9052621b2a544d38aa029654e0c0348-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:17:22 [async_llm.py:261] Added request cmpl-b9052621b2a544d38aa029654e0c0348-0.
INFO 03-02 00:17:23 [logger.py:42] Received request cmpl-c36889a94af14aa6a1421e52b79761dc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:17:23 [async_llm.py:261] Added request cmpl-c36889a94af14aa6a1421e52b79761dc-0.
INFO 03-02 00:17:24 [logger.py:42] Received request cmpl-d52f53ff39cb464a9e80a0e26bc73ae3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:17:24 [async_llm.py:261] Added request cmpl-d52f53ff39cb464a9e80a0e26bc73ae3-0.
INFO 03-02 00:17:25 [logger.py:42] Received request cmpl-de446af49be94747b9535680d365c0d8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:17:25 [async_llm.py:261] Added request cmpl-de446af49be94747b9535680d365c0d8-0.
INFO 03-02 00:17:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:17:26 [logger.py:42] Received request cmpl-ddfd45efa84b4b739146793900af3b10-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:17:26 [async_llm.py:261] Added request cmpl-ddfd45efa84b4b739146793900af3b10-0.
INFO 03-02 00:17:27 [logger.py:42] Received request cmpl-e4e9ebe716b44ebdb4567fa72fec77cb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:17:27 [async_llm.py:261] Added request cmpl-e4e9ebe716b44ebdb4567fa72fec77cb-0.
INFO 03-02 00:17:29 [logger.py:42] Received request cmpl-6893df5281a34be292d8818b088ab4f2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:17:29 [async_llm.py:261] Added request cmpl-6893df5281a34be292d8818b088ab4f2-0.
INFO 03-02 00:17:30 [logger.py:42] Received request cmpl-ae2202c0bb424483b419ed3d5b78eb36-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:17:30 [async_llm.py:261] Added request cmpl-ae2202c0bb424483b419ed3d5b78eb36-0.
INFO 03-02 00:17:31 [logger.py:42] Received request cmpl-95acef2ed4c549bcb61dd19b6834673f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:17:31 [async_llm.py:261] Added request cmpl-95acef2ed4c549bcb61dd19b6834673f-0.
INFO 03-02 00:17:32 [logger.py:42] Received request cmpl-eb78f69ff67146d48b3d62ccde2a3630-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:17:32 [async_llm.py:261] Added request cmpl-eb78f69ff67146d48b3d62ccde2a3630-0.
INFO 03-02 00:17:33 [logger.py:42] Received request cmpl-418399cdd94348249234a5a13f8e3bc3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:17:33 [async_llm.py:261] Added request cmpl-418399cdd94348249234a5a13f8e3bc3-0.
INFO 03-02 00:17:34 [logger.py:42] Received request cmpl-46492dd5a39248ac9417df2b2696830e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:17:34 [async_llm.py:261] Added request cmpl-46492dd5a39248ac9417df2b2696830e-0.
INFO 03-02 00:17:35 [logger.py:42] Received request cmpl-a57b3cfcf35d4765a2b7875406b09296-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:17:35 [async_llm.py:261] Added request cmpl-a57b3cfcf35d4765a2b7875406b09296-0.
INFO 03-02 00:17:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:17:36 [logger.py:42] Received request cmpl-2f72ca605b384d46bc3f2f00b316fedf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:17:36 [async_llm.py:261] Added request cmpl-2f72ca605b384d46bc3f2f00b316fedf-0.
INFO 03-02 00:17:37 [logger.py:42] Received request cmpl-d4c0735ddee949edb020458220165de4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:17:37 [async_llm.py:261] Added request cmpl-d4c0735ddee949edb020458220165de4-0.
INFO 03-02 00:17:38 [logger.py:42] Received request cmpl-067e038e5db7464b9393300862911b2a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:17:38 [async_llm.py:261] Added request cmpl-067e038e5db7464b9393300862911b2a-0.
INFO 03-02 00:17:39 [logger.py:42] Received request cmpl-9c85018f27b947929f5fa664f844db39-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:17:39 [async_llm.py:261] Added request cmpl-9c85018f27b947929f5fa664f844db39-0.
INFO 03-02 00:17:40 [logger.py:42] Received request cmpl-191e29b2d7144b8595a8038a6174bf8b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:17:40 [async_llm.py:261] Added request cmpl-191e29b2d7144b8595a8038a6174bf8b-0.
INFO 03-02 00:17:42 [logger.py:42] Received request cmpl-e867b1e9bb584616ad47697f0b426401-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:17:42 [async_llm.py:261] Added request cmpl-e867b1e9bb584616ad47697f0b426401-0.
INFO 03-02 00:17:43 [logger.py:42] Received request cmpl-8ec9ac3326ab4e3c830212953ecadd2f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:17:43 [async_llm.py:261] Added request cmpl-8ec9ac3326ab4e3c830212953ecadd2f-0.
INFO 03-02 00:17:44 [logger.py:42] Received request cmpl-20c7f274cf104f28b83b34deca3a1ffe-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:17:44 [async_llm.py:261] Added request cmpl-20c7f274cf104f28b83b34deca3a1ffe-0.
INFO 03-02 00:17:45 [logger.py:42] Received request cmpl-6385f30ef9734fd6863c887e9d86a775-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:17:45 [async_llm.py:261] Added request cmpl-6385f30ef9734fd6863c887e9d86a775-0.
INFO 03-02 00:17:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:17:46 [logger.py:42] Received request cmpl-14078e9b9fac42d298f73b683f42374f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:17:46 [async_llm.py:261] Added request cmpl-14078e9b9fac42d298f73b683f42374f-0.
INFO 03-02 00:17:47 [logger.py:42] Received request cmpl-ce8c0edd1e9e4489877d1eef087ad0ee-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:17:47 [async_llm.py:261] Added request cmpl-ce8c0edd1e9e4489877d1eef087ad0ee-0.
INFO 03-02 00:17:48 [logger.py:42] Received request cmpl-4d578b7f37a14cdcb034c55cb4cb305d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:17:48 [async_llm.py:261] Added request cmpl-4d578b7f37a14cdcb034c55cb4cb305d-0.
INFO 03-02 00:17:49 [logger.py:42] Received request cmpl-2bac573024354f6aa459cbcfb56e2dbe-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:17:49 [async_llm.py:261] Added request cmpl-2bac573024354f6aa459cbcfb56e2dbe-0.
INFO 03-02 00:17:50 [logger.py:42] Received request cmpl-1d0f0836b2f44bc4b6ec335e936c9a0c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:17:50 [async_llm.py:261] Added request cmpl-1d0f0836b2f44bc4b6ec335e936c9a0c-0.
INFO 03-02 00:17:51 [logger.py:42] Received request cmpl-6b3c15d2ee1344b6b69db95c3da5a74f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:17:51 [async_llm.py:261] Added request cmpl-6b3c15d2ee1344b6b69db95c3da5a74f-0.
INFO 03-02 00:17:52 [logger.py:42] Received request cmpl-f5052ddc8f834288b8c2f5f53e3a8a1c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:17:52 [async_llm.py:261] Added request cmpl-f5052ddc8f834288b8c2f5f53e3a8a1c-0.
INFO 03-02 00:17:53 [logger.py:42] Received request cmpl-06c5a11037d940b68f2922274acef19f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:17:53 [async_llm.py:261] Added request cmpl-06c5a11037d940b68f2922274acef19f-0.
INFO 03-02 00:17:55 [logger.py:42] Received request cmpl-3c1c272ec29f4b7d817555cddacbd0c6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:17:55 [async_llm.py:261] Added request cmpl-3c1c272ec29f4b7d817555cddacbd0c6-0.
INFO 03-02 00:17:56 [logger.py:42] Received request cmpl-09e10af1765f4eb6b87282fe553f9df1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:17:56 [async_llm.py:261] Added request cmpl-09e10af1765f4eb6b87282fe553f9df1-0.
INFO 03-02 00:17:56 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:17:57 [logger.py:42] Received request cmpl-9bc073a95d8d4172a020d344c7379f7c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:17:57 [async_llm.py:261] Added request cmpl-9bc073a95d8d4172a020d344c7379f7c-0.
INFO 03-02 00:17:58 [logger.py:42] Received request cmpl-747e258bb99245649b82612cb70ed813-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:17:58 [async_llm.py:261] Added request cmpl-747e258bb99245649b82612cb70ed813-0.
INFO 03-02 00:17:59 [logger.py:42] Received request cmpl-5ad7171cc66e471b9fcaccce1e986318-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:17:59 [async_llm.py:261] Added request cmpl-5ad7171cc66e471b9fcaccce1e986318-0.
INFO 03-02 00:18:00 [logger.py:42] Received request cmpl-e6d762eec8214f9baf4e485a1736bf32-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:18:00 [async_llm.py:261] Added request cmpl-e6d762eec8214f9baf4e485a1736bf32-0.
INFO 03-02 00:18:01 [logger.py:42] Received request cmpl-2124cb01b2f04da08fb6a93d332860fe-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:18:01 [async_llm.py:261] Added request cmpl-2124cb01b2f04da08fb6a93d332860fe-0.
INFO 03-02 00:18:02 [logger.py:42] Received request cmpl-f28c8e715e814477bb119fbd24902300-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:18:02 [async_llm.py:261] Added request cmpl-f28c8e715e814477bb119fbd24902300-0.
INFO 03-02 00:18:03 [logger.py:42] Received request cmpl-d0da21aa091b415181dc37ba9f0cd552-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:18:03 [async_llm.py:261] Added request cmpl-d0da21aa091b415181dc37ba9f0cd552-0.
INFO 03-02 00:18:04 [logger.py:42] Received request cmpl-50692f9fefce4ba1ba0739138d711ea2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:18:04 [async_llm.py:261] Added request cmpl-50692f9fefce4ba1ba0739138d711ea2-0.
INFO 03-02 00:18:05 [logger.py:42] Received request cmpl-ea086e4fa36c4279b3c76de4a93af47c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:18:05 [async_llm.py:261] Added request cmpl-ea086e4fa36c4279b3c76de4a93af47c-0.
INFO 03-02 00:18:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:18:06 [logger.py:42] Received request cmpl-dc04050041ca4f4f96a3fe9adaaf4dd3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:18:06 [async_llm.py:261] Added request cmpl-dc04050041ca4f4f96a3fe9adaaf4dd3-0.
INFO 03-02 00:18:08 [logger.py:42] Received request cmpl-feebe1f466f8424786f3502186e8adf9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:18:08 [async_llm.py:261] Added request cmpl-feebe1f466f8424786f3502186e8adf9-0.
INFO 03-02 00:18:09 [logger.py:42] Received request cmpl-ea8e7a0b43cc4dec9a3f5eb1e73ea294-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:18:09 [async_llm.py:261] Added request cmpl-ea8e7a0b43cc4dec9a3f5eb1e73ea294-0.
INFO 03-02 00:18:10 [logger.py:42] Received request cmpl-f75c6f83434a4964ba4909a151e3d143-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:18:10 [async_llm.py:261] Added request cmpl-f75c6f83434a4964ba4909a151e3d143-0.
INFO 03-02 00:18:11 [logger.py:42] Received request cmpl-5da33a1d758049eba53f77a71061d36a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:18:11 [async_llm.py:261] Added request cmpl-5da33a1d758049eba53f77a71061d36a-0.
INFO 03-02 00:18:12 [logger.py:42] Received request cmpl-97ce878e039747deab3b8501c20bec77-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:18:12 [async_llm.py:261] Added request cmpl-97ce878e039747deab3b8501c20bec77-0.
[... 3 further requests with the same prompt and SamplingParams received 00:18:13-00:18:15 (cmpl-06ec31b37dbe47f09c4226351cd98813-0 through cmpl-bca1982c61c44db9846cca28639c0fae-0), each answered "POST /v1/completions HTTP/1.1" 200 OK and added to the engine ...]
INFO 03-02 00:18:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[... 9 further requests with the same prompt and SamplingParams received 00:18:16-00:18:25 (cmpl-f78c652e2734472aa341f60208a42f9e-0 through cmpl-ab50892c643b47339f3c948ee4a9a3ec-0), each answered "POST /v1/completions HTTP/1.1" 200 OK and added to the engine ...]
INFO 03-02 00:18:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[... 10 further requests with the same prompt and SamplingParams received 00:18:26-00:18:36 (cmpl-0896018f38344c9983764915faafb9d7-0 through cmpl-731aeba759dc4c92828f39d4b5748598-0), each answered "POST /v1/completions HTTP/1.1" 200 OK and added to the engine ...]
INFO 03-02 00:18:36 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.6 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 1.3%, Prefix cache hit rate: 0.0%
[... 9 further requests with the same prompt and SamplingParams received 00:18:37-00:18:45 (cmpl-cbc6b31e4ee443a8897d503f52898bfa-0 through cmpl-3bd9abecf2b84cfa88785536462993a0-0), each answered "POST /v1/completions HTTP/1.1" 200 OK and added to the engine ...]
INFO 03-02 00:18:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.9 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[... 9 further requests with the same prompt and SamplingParams received 00:18:47-00:18:55 (cmpl-1dbdc521d07d4b67b433994876c2a8a4-0 through cmpl-fe001d2112ef4071b86b956c6f7d4b64-0), each answered "POST /v1/completions HTTP/1.1" 200 OK and added to the engine ...]
INFO 03-02 00:18:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:18:56 [logger.py:42] Received request cmpl-0b4ddf66e00c48718df284504fe52ce2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:18:56 [async_llm.py:261] Added request cmpl-0b4ddf66e00c48718df284504fe52ce2-0.
INFO 03-02 00:18:57 [logger.py:42] Received request cmpl-58d3166eb23d4e0db93bb4453d9840f8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:18:57 [async_llm.py:261] Added request cmpl-58d3166eb23d4e0db93bb4453d9840f8-0.
INFO 03-02 00:18:58 [logger.py:42] Received request cmpl-cc224e15df934a0bad09c39fc275225c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:18:58 [async_llm.py:261] Added request cmpl-cc224e15df934a0bad09c39fc275225c-0.
INFO 03-02 00:19:00 [logger.py:42] Received request cmpl-a402816014ff4006952c79c7d4d8d127-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:00 [async_llm.py:261] Added request cmpl-a402816014ff4006952c79c7d4d8d127-0.
INFO 03-02 00:19:01 [logger.py:42] Received request cmpl-31f45c5135be458599ddb86b28ede5df-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:01 [async_llm.py:261] Added request cmpl-31f45c5135be458599ddb86b28ede5df-0.
INFO 03-02 00:19:02 [logger.py:42] Received request cmpl-b3a6b15baa3a4a83b0559e55e2b41192-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:02 [async_llm.py:261] Added request cmpl-b3a6b15baa3a4a83b0559e55e2b41192-0.
INFO 03-02 00:19:03 [logger.py:42] Received request cmpl-ef0ba180d2d64ce59344b9620c7dceb3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:03 [async_llm.py:261] Added request cmpl-ef0ba180d2d64ce59344b9620c7dceb3-0.
INFO 03-02 00:19:04 [logger.py:42] Received request cmpl-daffe83fa5b840b1b15ac768502a7122-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:04 [async_llm.py:261] Added request cmpl-daffe83fa5b840b1b15ac768502a7122-0.
INFO 03-02 00:19:05 [logger.py:42] Received request cmpl-0f99baf599c049a59abd0400acf0c7fd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:05 [async_llm.py:261] Added request cmpl-0f99baf599c049a59abd0400acf0c7fd-0.
INFO 03-02 00:19:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:19:06 [logger.py:42] Received request cmpl-70b856840d9f4b97bf92d5407fcafd4d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:06 [async_llm.py:261] Added request cmpl-70b856840d9f4b97bf92d5407fcafd4d-0.
INFO 03-02 00:19:07 [logger.py:42] Received request cmpl-4b5b2c702bbd4223a2f897e53134c5fc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:07 [async_llm.py:261] Added request cmpl-4b5b2c702bbd4223a2f897e53134c5fc-0.
INFO 03-02 00:19:08 [logger.py:42] Received request cmpl-98320b79bc3a4baca28e51b5a15ac42c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:08 [async_llm.py:261] Added request cmpl-98320b79bc3a4baca28e51b5a15ac42c-0.
INFO 03-02 00:19:09 [logger.py:42] Received request cmpl-04ed7806ad2e4bdcace65c441061eba2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:09 [async_llm.py:261] Added request cmpl-04ed7806ad2e4bdcace65c441061eba2-0.
INFO 03-02 00:19:10 [logger.py:42] Received request cmpl-8d92574785474f7ea6c1dc0dadefe12f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:10 [async_llm.py:261] Added request cmpl-8d92574785474f7ea6c1dc0dadefe12f-0.
INFO 03-02 00:19:11 [logger.py:42] Received request cmpl-aa152b33576d4146a8efa647abe7b64b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:11 [async_llm.py:261] Added request cmpl-aa152b33576d4146a8efa647abe7b64b-0.
INFO 03-02 00:19:13 [logger.py:42] Received request cmpl-958192b0c06c4ec2bcd3efc53d972d27-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:13 [async_llm.py:261] Added request cmpl-958192b0c06c4ec2bcd3efc53d972d27-0.
INFO 03-02 00:19:14 [logger.py:42] Received request cmpl-d4d947ab79474083bbee531be6a37e35-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:14 [async_llm.py:261] Added request cmpl-d4d947ab79474083bbee531be6a37e35-0.
INFO 03-02 00:19:15 [logger.py:42] Received request cmpl-e98a0e629df54fe3a62151cd5adbfb73-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:15 [async_llm.py:261] Added request cmpl-e98a0e629df54fe3a62151cd5adbfb73-0.
INFO 03-02 00:19:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:19:16 [logger.py:42] Received request cmpl-a15fcdf575d54a3488170c355289fb99-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:16 [async_llm.py:261] Added request cmpl-a15fcdf575d54a3488170c355289fb99-0.
INFO 03-02 00:19:17 [logger.py:42] Received request cmpl-b7e981db0c9044bc830d1c21d39f0f0f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:17 [async_llm.py:261] Added request cmpl-b7e981db0c9044bc830d1c21d39f0f0f-0.
INFO 03-02 00:19:18 [logger.py:42] Received request cmpl-e40faf2a3ef14fc68f8b119c578becc1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:18 [async_llm.py:261] Added request cmpl-e40faf2a3ef14fc68f8b119c578becc1-0.
INFO 03-02 00:19:19 [logger.py:42] Received request cmpl-23b1098546734e2bb4bd232aceafe15d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:19 [async_llm.py:261] Added request cmpl-23b1098546734e2bb4bd232aceafe15d-0.
INFO 03-02 00:19:20 [logger.py:42] Received request cmpl-4c4fd9ec92634e569314dc9662e7d338-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:20 [async_llm.py:261] Added request cmpl-4c4fd9ec92634e569314dc9662e7d338-0.
INFO 03-02 00:19:21 [logger.py:42] Received request cmpl-06625b95bef941a3ac6170a5ac533fff-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:21 [async_llm.py:261] Added request cmpl-06625b95bef941a3ac6170a5ac533fff-0.
INFO 03-02 00:19:22 [logger.py:42] Received request cmpl-cfc9c576ccea42e09d959a7da46f0a10-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:22 [async_llm.py:261] Added request cmpl-cfc9c576ccea42e09d959a7da46f0a10-0.
INFO 03-02 00:19:23 [logger.py:42] Received request cmpl-39227dbfe79b4573903928b132e382c0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:23 [async_llm.py:261] Added request cmpl-39227dbfe79b4573903928b132e382c0-0.
INFO 03-02 00:19:24 [logger.py:42] Received request cmpl-f564945aac1045b7b41de9aca01cc700-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:24 [async_llm.py:261] Added request cmpl-f564945aac1045b7b41de9aca01cc700-0.
INFO 03-02 00:19:26 [logger.py:42] Received request cmpl-9cf0f689e1e8457ea3a7f554c81d9778-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:26 [async_llm.py:261] Added request cmpl-9cf0f689e1e8457ea3a7f554c81d9778-0.
INFO 03-02 00:19:26 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:19:27 [logger.py:42] Received request cmpl-ab90a8feaef147999a6adee1147e667f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:27 [async_llm.py:261] Added request cmpl-ab90a8feaef147999a6adee1147e667f-0.
INFO 03-02 00:19:28 [logger.py:42] Received request cmpl-189ea392faa14f8d896870d8a0fedcc2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:28 [async_llm.py:261] Added request cmpl-189ea392faa14f8d896870d8a0fedcc2-0.
INFO 03-02 00:19:29 [logger.py:42] Received request cmpl-b47e64a8483648b581f8f47ac987bba4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:29 [async_llm.py:261] Added request cmpl-b47e64a8483648b581f8f47ac987bba4-0.
INFO 03-02 00:19:30 [logger.py:42] Received request cmpl-54940c7d358b4e19b5cef63a2d51fe7a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:30 [async_llm.py:261] Added request cmpl-54940c7d358b4e19b5cef63a2d51fe7a-0.
INFO 03-02 00:19:31 [logger.py:42] Received request cmpl-bc06e0241a4c41909fcbc4d6465468d7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:31 [async_llm.py:261] Added request cmpl-bc06e0241a4c41909fcbc4d6465468d7-0.
INFO 03-02 00:19:32 [logger.py:42] Received request cmpl-bab4d4fbb33d47b995a0d5fd159c72e5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:32 [async_llm.py:261] Added request cmpl-bab4d4fbb33d47b995a0d5fd159c72e5-0.
INFO 03-02 00:19:33 [logger.py:42] Received request cmpl-37554cb00d8d4495b98a196deb8d2786-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:33 [async_llm.py:261] Added request cmpl-37554cb00d8d4495b98a196deb8d2786-0.
INFO 03-02 00:19:34 [logger.py:42] Received request cmpl-eeb478ec587f44bfa6b7049d8eb38809-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:34 [async_llm.py:261] Added request cmpl-eeb478ec587f44bfa6b7049d8eb38809-0.
INFO 03-02 00:19:35 [logger.py:42] Received request cmpl-f3c29e25f44d49449333790f083f91e5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:35 [async_llm.py:261] Added request cmpl-f3c29e25f44d49449333790f083f91e5-0.
INFO 03-02 00:19:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:19:36 [logger.py:42] Received request cmpl-f6ba9df4185542a9864ea42e54db35f7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:36 [async_llm.py:261] Added request cmpl-f6ba9df4185542a9864ea42e54db35f7-0.
INFO 03-02 00:19:37 [logger.py:42] Received request cmpl-e00db583ce1f4e798b126cc454d9bb70-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:37 [async_llm.py:261] Added request cmpl-e00db583ce1f4e798b126cc454d9bb70-0.
INFO 03-02 00:19:39 [logger.py:42] Received request cmpl-390542406b8d457daa27a0ebfe54d654-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:39 [async_llm.py:261] Added request cmpl-390542406b8d457daa27a0ebfe54d654-0.
INFO 03-02 00:19:40 [logger.py:42] Received request cmpl-2bf0e0d8a69941999dcd8a76d7e0fb48-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:40 [async_llm.py:261] Added request cmpl-2bf0e0d8a69941999dcd8a76d7e0fb48-0.
INFO 03-02 00:19:41 [logger.py:42] Received request cmpl-4242561d91a9457891f173f04bfa2f3b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:41 [async_llm.py:261] Added request cmpl-4242561d91a9457891f173f04bfa2f3b-0.
INFO 03-02 00:19:42 [logger.py:42] Received request cmpl-5d28a762b77347a0988db2426ad71416-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:42 [async_llm.py:261] Added request cmpl-5d28a762b77347a0988db2426ad71416-0.
INFO 03-02 00:19:43 [logger.py:42] Received request cmpl-bd962042beb94489b4e2ebb1a6c707cf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:43 [async_llm.py:261] Added request cmpl-bd962042beb94489b4e2ebb1a6c707cf-0.
INFO 03-02 00:19:44 [logger.py:42] Received request cmpl-c2135d301cc94b0bbb186f2d4176a960-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:44 [async_llm.py:261] Added request cmpl-c2135d301cc94b0bbb186f2d4176a960-0.
INFO 03-02 00:19:45 [logger.py:42] Received request cmpl-59fcde0b39de48cd84c1228c5134840d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:45 [async_llm.py:261] Added request cmpl-59fcde0b39de48cd84c1228c5134840d-0.
INFO 03-02 00:19:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:19:46 [logger.py:42] Received request cmpl-0ed504a90a514da2ba2dd9ec5b8e14fa-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:46 [async_llm.py:261] Added request cmpl-0ed504a90a514da2ba2dd9ec5b8e14fa-0.
INFO 03-02 00:19:47 [logger.py:42] Received request cmpl-8d59a1dc6e534e538811162317d9e671-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:47 [async_llm.py:261] Added request cmpl-8d59a1dc6e534e538811162317d9e671-0.
INFO 03-02 00:19:48 [logger.py:42] Received request cmpl-5b70c26720cf4b3f83df49dc05422cbe-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:48 [async_llm.py:261] Added request cmpl-5b70c26720cf4b3f83df49dc05422cbe-0.
INFO 03-02 00:19:49 [logger.py:42] Received request cmpl-38da480b590947b58f135319407389a9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:49 [async_llm.py:261] Added request cmpl-38da480b590947b58f135319407389a9-0.
INFO 03-02 00:19:51 [logger.py:42] Received request cmpl-03b42310cc9d47a1ae89ffafb5444043-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:51 [async_llm.py:261] Added request cmpl-03b42310cc9d47a1ae89ffafb5444043-0.
INFO 03-02 00:19:52 [logger.py:42] Received request cmpl-9aeb4d536b7f46de961e166a25d6a4af-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:52 [async_llm.py:261] Added request cmpl-9aeb4d536b7f46de961e166a25d6a4af-0.
INFO 03-02 00:19:53 [logger.py:42] Received request cmpl-8d72e02b88bc42a784fc242934cf44e6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:53 [async_llm.py:261] Added request cmpl-8d72e02b88bc42a784fc242934cf44e6-0.
INFO 03-02 00:19:54 [logger.py:42] Received request cmpl-24a31ce5b995455582d51cc3de1e35f0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:54 [async_llm.py:261] Added request cmpl-24a31ce5b995455582d51cc3de1e35f0-0.
INFO 03-02 00:19:55 [logger.py:42] Received request cmpl-79fea417bd7148a6937efff56d036d47-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:55 [async_llm.py:261] Added request cmpl-79fea417bd7148a6937efff56d036d47-0.
INFO 03-02 00:19:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:19:56 [logger.py:42] Received request cmpl-7a4a4ba08bd44a1a814f755e68d5f5bb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:56 [async_llm.py:261] Added request cmpl-7a4a4ba08bd44a1a814f755e68d5f5bb-0.
INFO 03-02 00:19:57 [logger.py:42] Received request cmpl-d96aa1ccd1be43a2a2b2eb622f3bea23-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:57 [async_llm.py:261] Added request cmpl-d96aa1ccd1be43a2a2b2eb622f3bea23-0.
INFO 03-02 00:19:58 [logger.py:42] Received request cmpl-d0a954f5f3f54cfa935350856ee0f54e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:58 [async_llm.py:261] Added request cmpl-d0a954f5f3f54cfa935350856ee0f54e-0.
INFO 03-02 00:19:59 [logger.py:42] Received request cmpl-d39835bee87648bb9ef09788d0ce306d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:59 [async_llm.py:261] Added request cmpl-d39835bee87648bb9ef09788d0ce306d-0.
INFO 03-02 00:20:00 [logger.py:42] Received request cmpl-187d53e4309e4b3ca7eb0c050660326c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:20:00 [async_llm.py:261] Added request cmpl-187d53e4309e4b3ca7eb0c050660326c-0.
INFO 03-02 00:20:01 [logger.py:42] Received request cmpl-ca07b8275514431aad828b7e7b2ae51e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:20:01 [async_llm.py:261] Added request cmpl-ca07b8275514431aad828b7e7b2ae51e-0.
INFO 03-02 00:20:02 [logger.py:42] Received request cmpl-fa67440fdd124ed486fb80d69cd350f1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:20:02 [async_llm.py:261] Added request cmpl-fa67440fdd124ed486fb80d69cd350f1-0.
INFO 03-02 00:20:04 [logger.py:42] Received request cmpl-94856649851042648bdd061986e34599-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:20:04 [async_llm.py:261] Added request cmpl-94856649851042648bdd061986e34599-0.
INFO 03-02 00:20:05 [logger.py:42] Received request cmpl-e4c12f41879f46e5afe64da7a6d00794-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:20:05 [async_llm.py:261] Added request cmpl-e4c12f41879f46e5afe64da7a6d00794-0.
INFO 03-02 00:20:06 [logger.py:42] Received request cmpl-ab4a02bf010d43c6bdf4cceb01d7003d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:20:06 [async_llm.py:261] Added request cmpl-ab4a02bf010d43c6bdf4cceb01d7003d-0.
INFO 03-02 00:20:06 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:20:07 [logger.py:42] Received request cmpl-266fe4efb16346569dd6b32981d12460-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:20:07 [async_llm.py:261] Added request cmpl-266fe4efb16346569dd6b32981d12460-0.
INFO 03-02 00:20:08 [logger.py:42] Received request cmpl-ff078113c0a5475d9e5220f300025f87-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:20:08 [async_llm.py:261] Added request cmpl-ff078113c0a5475d9e5220f300025f87-0.
INFO 03-02 00:20:09 [logger.py:42] Received request cmpl-7a5f8fb86c184362b0b5a180b94d681a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:20:09 [async_llm.py:261] Added request cmpl-7a5f8fb86c184362b0b5a180b94d681a-0.
[... 03-02 00:20:10 – 00:20:52: 40 further identical request cycles omitted, one per second, each with the same prompt ('write a quick sort algorithm.') and the same SamplingParams (temperature=0.0, max_tokens=5), logged as "Received request cmpl-…-0", "POST /v1/completions HTTP/1.1" 200 OK, and a matching "Added request" entry. The periodic Engine 000 summaries at 00:20:16, 00:20:26, 00:20:36, and 00:20:46 all report: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0% ...]
INFO 03-02 00:20:53 [logger.py:42] Received request cmpl-f58948f1b2bd4304b597753fa4dd9b0b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:20:53 [async_llm.py:261] Added request cmpl-f58948f1b2bd4304b597753fa4dd9b0b-0.
INFO 03-02 00:20:54 [logger.py:42] Received request cmpl-4354d275e0874d478f1bd2d3e97ce158-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:20:54 [async_llm.py:261] Added request cmpl-4354d275e0874d478f1bd2d3e97ce158-0.
INFO 03-02 00:20:56 [logger.py:42] Received request cmpl-70d0e3de09e84edaa0a69571ade74e80-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:20:56 [async_llm.py:261] Added request cmpl-70d0e3de09e84edaa0a69571ade74e80-0.
INFO 03-02 00:20:56 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:20:57 [logger.py:42] Received request cmpl-165cd8df04e642ab9840ea5bb3963c9e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:20:57 [async_llm.py:261] Added request cmpl-165cd8df04e642ab9840ea5bb3963c9e-0.
INFO 03-02 00:20:58 [logger.py:42] Received request cmpl-c483f61801784b408d60af4a9304948d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:20:58 [async_llm.py:261] Added request cmpl-c483f61801784b408d60af4a9304948d-0.
INFO 03-02 00:20:59 [logger.py:42] Received request cmpl-602c6c2e0ad9446e886e76e961868922-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:20:59 [async_llm.py:261] Added request cmpl-602c6c2e0ad9446e886e76e961868922-0.
INFO 03-02 00:21:00 [logger.py:42] Received request cmpl-35c4b561e9d14db795838dd8f872b7a4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:21:00 [async_llm.py:261] Added request cmpl-35c4b561e9d14db795838dd8f872b7a4-0.
INFO 03-02 00:21:01 [logger.py:42] Received request cmpl-566ff0b190a8489f8ce4177a3359ea5a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:21:01 [async_llm.py:261] Added request cmpl-566ff0b190a8489f8ce4177a3359ea5a-0.
INFO 03-02 00:21:02 [logger.py:42] Received request cmpl-514ae191bdb4464d9bc058687e63804c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:21:02 [async_llm.py:261] Added request cmpl-514ae191bdb4464d9bc058687e63804c-0.
INFO 03-02 00:21:03 [logger.py:42] Received request cmpl-28aa50ff7f4546d0951b6645fe01ea1d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:21:03 [async_llm.py:261] Added request cmpl-28aa50ff7f4546d0951b6645fe01ea1d-0.
INFO 03-02 00:21:04 [logger.py:42] Received request cmpl-c235c4f9460540c6a3e085fa5b2553dd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:21:04 [async_llm.py:261] Added request cmpl-c235c4f9460540c6a3e085fa5b2553dd-0.
INFO 03-02 00:21:05 [logger.py:42] Received request cmpl-0fb5457d974a4cf08b589209405c776a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:21:05 [async_llm.py:261] Added request cmpl-0fb5457d974a4cf08b589209405c776a-0.
INFO 03-02 00:21:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:21:06 [logger.py:42] Received request cmpl-b8ddca3360cc430fa2ea7f733ce23cb7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:21:06 [async_llm.py:261] Added request cmpl-b8ddca3360cc430fa2ea7f733ce23cb7-0.
INFO 03-02 00:21:07 [logger.py:42] Received request cmpl-74043ef01a7b4415afe77a025f6d76cd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:21:07 [async_llm.py:261] Added request cmpl-74043ef01a7b4415afe77a025f6d76cd-0.
INFO 03-02 00:21:09 [logger.py:42] Received request cmpl-b70f5db068014390a3d7a5e01465e35f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:21:09 [async_llm.py:261] Added request cmpl-b70f5db068014390a3d7a5e01465e35f-0.
INFO 03-02 00:21:10 [logger.py:42] Received request cmpl-becfca433bfd491ea7a9be3caa574546-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:21:10 [async_llm.py:261] Added request cmpl-becfca433bfd491ea7a9be3caa574546-0.
INFO 03-02 00:21:11 [logger.py:42] Received request cmpl-f4e117a89f394c2a8c0122ae5f84e059-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:21:11 [async_llm.py:261] Added request cmpl-f4e117a89f394c2a8c0122ae5f84e059-0.
INFO 03-02 00:21:12 [logger.py:42] Received request cmpl-932fba2ececa4f6fb17c2180641bffaa-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:21:12 [async_llm.py:261] Added request cmpl-932fba2ececa4f6fb17c2180641bffaa-0.
INFO 03-02 00:21:13 [logger.py:42] Received request cmpl-1a7fd70f5b55448eba11099d0d0bf4f3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:21:13 [async_llm.py:261] Added request cmpl-1a7fd70f5b55448eba11099d0d0bf4f3-0.
INFO 03-02 00:21:14 [logger.py:42] Received request cmpl-ab3a519f4fb74841985a92257473b9c9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:21:14 [async_llm.py:261] Added request cmpl-ab3a519f4fb74841985a92257473b9c9-0.
INFO 03-02 00:21:15 [logger.py:42] Received request cmpl-7e961b65c04e4642a8b9248d7b0c5122-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:21:15 [async_llm.py:261] Added request cmpl-7e961b65c04e4642a8b9248d7b0c5122-0.
INFO 03-02 00:21:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:21:16 [logger.py:42] Received request cmpl-7b57d35b03524b37bb1b4b734cc4889e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:21:16 [async_llm.py:261] Added request cmpl-7b57d35b03524b37bb1b4b734cc4889e-0.
INFO 03-02 00:21:17 [logger.py:42] Received request cmpl-60d51848bcb8493897ec5506eb741ab3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:21:17 [async_llm.py:261] Added request cmpl-60d51848bcb8493897ec5506eb741ab3-0.
INFO 03-02 00:21:18 [logger.py:42] Received request cmpl-5ae86dc665bf49d18a49e047c2d349aa-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:21:18 [async_llm.py:261] Added request cmpl-5ae86dc665bf49d18a49e047c2d349aa-0.
INFO 03-02 00:21:19 [logger.py:42] Received request cmpl-5704f7f5445640c684d54042c8dda31e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:21:19 [async_llm.py:261] Added request cmpl-5704f7f5445640c684d54042c8dda31e-0.
INFO 03-02 00:21:20 [logger.py:42] Received request cmpl-024e6462fede445091b5fb5e38ab79a9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:21:20 [async_llm.py:261] Added request cmpl-024e6462fede445091b5fb5e38ab79a9-0.
INFO 03-02 00:21:22 [logger.py:42] Received request cmpl-10028e5bea8b422b9cad83a82df4fd34-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:21:22 [async_llm.py:261] Added request cmpl-10028e5bea8b422b9cad83a82df4fd34-0.
INFO 03-02 00:21:23 [logger.py:42] Received request cmpl-ed09403e86cf4d909b11fc2d44fc7a3d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:21:23 [async_llm.py:261] Added request cmpl-ed09403e86cf4d909b11fc2d44fc7a3d-0.
INFO 03-02 00:21:24 [logger.py:42] Received request cmpl-81956458000044f5a6002e6a0115ecac-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:21:24 [async_llm.py:261] Added request cmpl-81956458000044f5a6002e6a0115ecac-0.
INFO 03-02 00:21:25 [logger.py:42] Received request cmpl-c5edb7bb6c5e4bf0bb96a852ae33c7c0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:21:25 [async_llm.py:261] Added request cmpl-c5edb7bb6c5e4bf0bb96a852ae33c7c0-0.
INFO 03-02 00:21:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:21:26 [logger.py:42] Received request cmpl-49a9dc6a683542e299743f9c4d112d5d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:21:26 [async_llm.py:261] Added request cmpl-49a9dc6a683542e299743f9c4d112d5d-0.
INFO 03-02 00:21:27 [logger.py:42] Received request cmpl-5da5cff0c59d4c6fb505c409a1b427a8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:21:27 [async_llm.py:261] Added request cmpl-5da5cff0c59d4c6fb505c409a1b427a8-0.
INFO 03-02 00:21:28 [logger.py:42] Received request cmpl-0ae12a48afaa4a6aabaf10970071a9b1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:21:28 [async_llm.py:261] Added request cmpl-0ae12a48afaa4a6aabaf10970071a9b1-0.
INFO 03-02 00:21:29 [logger.py:42] Received request cmpl-85634b437e68467f9f0efba777b2efd5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:21:29 [async_llm.py:261] Added request cmpl-85634b437e68467f9f0efba777b2efd5-0.
INFO 03-02 00:21:30 [logger.py:42] Received request cmpl-2d740d0530d14bebbba57f4181ecf820-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:21:30 [async_llm.py:261] Added request cmpl-2d740d0530d14bebbba57f4181ecf820-0.
INFO 03-02 00:21:31 [logger.py:42] Received request cmpl-a282300ba42141b2a36c66d76a612795-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:21:31 [async_llm.py:261] Added request cmpl-a282300ba42141b2a36c66d76a612795-0.
INFO 03-02 00:21:32 [logger.py:42] Received request cmpl-03734265701345ce8ffaa0a42d5275f6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:21:32 [async_llm.py:261] Added request cmpl-03734265701345ce8ffaa0a42d5275f6-0.
INFO 03-02 00:21:33 [logger.py:42] Received request cmpl-ae2fd634c5fe4405b945e836f361958d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:21:33 [async_llm.py:261] Added request cmpl-ae2fd634c5fe4405b945e836f361958d-0.
INFO 03-02 00:21:35 [logger.py:42] Received request cmpl-77912d1480f740098fd50a654af7f9cd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:21:35 [async_llm.py:261] Added request cmpl-77912d1480f740098fd50a654af7f9cd-0.
INFO 03-02 00:21:36 [logger.py:42] Received request cmpl-e093494a777c47128f5a6184f790fb49-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:21:36 [async_llm.py:261] Added request cmpl-e093494a777c47128f5a6184f790fb49-0.
INFO 03-02 00:21:36 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:21:37 [logger.py:42] Received request cmpl-129be58dc2564153b685fb9b340e791b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:21:37 [async_llm.py:261] Added request cmpl-129be58dc2564153b685fb9b340e791b-0.
INFO 03-02 00:21:38 [logger.py:42] Received request cmpl-30e4df1a364641939d95951c3c8ac95e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:21:38 [async_llm.py:261] Added request cmpl-30e4df1a364641939d95951c3c8ac95e-0.
INFO 03-02 00:21:39 [logger.py:42] Received request cmpl-681712faa59b426fb9c5ecd8b2befb07-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:21:39 [async_llm.py:261] Added request cmpl-681712faa59b426fb9c5ecd8b2befb07-0.
INFO 03-02 00:21:40 [logger.py:42] Received request cmpl-df740935091c44e89eb05373f3aa7c7a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:21:40 [async_llm.py:261] Added request cmpl-df740935091c44e89eb05373f3aa7c7a-0.
INFO 03-02 00:21:41 [logger.py:42] Received request cmpl-da11e28a855d48a3a33766afb3793bc6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:21:41 [async_llm.py:261] Added request cmpl-da11e28a855d48a3a33766afb3793bc6-0.
INFO 03-02 00:21:42 [logger.py:42] Received request cmpl-f08f45d53eff450e8db7c6276e1dde3f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:21:42 [async_llm.py:261] Added request cmpl-f08f45d53eff450e8db7c6276e1dde3f-0.
INFO 03-02 00:21:43 [logger.py:42] Received request cmpl-509e755e1cc142bdbb98f5fdb27aedb1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:21:43 [async_llm.py:261] Added request cmpl-509e755e1cc142bdbb98f5fdb27aedb1-0.
INFO 03-02 00:21:44 [logger.py:42] Received request cmpl-8449246697834b9397e522e47eb3e615-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:21:44 [async_llm.py:261] Added request cmpl-8449246697834b9397e522e47eb3e615-0.
INFO 03-02 00:21:45 [logger.py:42] Received request cmpl-ab51742172924102b0a8e817107ba5ac-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:21:45 [async_llm.py:261] Added request cmpl-ab51742172924102b0a8e817107ba5ac-0.
INFO 03-02 00:21:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:21:46 [logger.py:42] Received request cmpl-950e18660b064e778128e837ff4c036b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:21:46 [async_llm.py:261] Added request cmpl-950e18660b064e778128e837ff4c036b-0.
INFO 03-02 00:21:48 [logger.py:42] Received request cmpl-2f944afb639e459da243f7c03aa893c5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:21:48 [async_llm.py:261] Added request cmpl-2f944afb639e459da243f7c03aa893c5-0.
INFO 03-02 00:21:49 [logger.py:42] Received request cmpl-7fc38f79f302426c93298ab5e1403694-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:21:49 [async_llm.py:261] Added request cmpl-7fc38f79f302426c93298ab5e1403694-0.
INFO 03-02 00:21:50 [logger.py:42] Received request cmpl-11bba6fbcedb4b4abfb51884c3fad453-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:21:50 [async_llm.py:261] Added request cmpl-11bba6fbcedb4b4abfb51884c3fad453-0.
INFO 03-02 00:21:51 [logger.py:42] Received request cmpl-70d01594fb3c4c81b343c371912d12b0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:21:51 [async_llm.py:261] Added request cmpl-70d01594fb3c4c81b343c371912d12b0-0.
INFO 03-02 00:21:52 [logger.py:42] Received request cmpl-1abbcac3b8384740950c2a077f1f0583-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:21:52 [async_llm.py:261] Added request cmpl-1abbcac3b8384740950c2a077f1f0583-0.
INFO 03-02 00:21:53 [logger.py:42] Received request cmpl-3a3a97b6bb2b4e17bbcd616c37c6a1d9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:21:53 [async_llm.py:261] Added request cmpl-3a3a97b6bb2b4e17bbcd616c37c6a1d9-0.
INFO 03-02 00:21:54 [logger.py:42] Received request cmpl-8d5da955d68842d99bd99a0cddbee40a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:21:54 [async_llm.py:261] Added request cmpl-8d5da955d68842d99bd99a0cddbee40a-0.
INFO 03-02 00:21:55 [logger.py:42] Received request cmpl-ebbd304cd7b44b11a4d98b1475353e3f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:21:55 [async_llm.py:261] Added request cmpl-ebbd304cd7b44b11a4d98b1475353e3f-0.
INFO 03-02 00:21:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:21:56 [logger.py:42] Received request cmpl-39f5c402c953496386bd8c9c6593df91-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:21:56 [async_llm.py:261] Added request cmpl-39f5c402c953496386bd8c9c6593df91-0.
INFO:  1.2.3.4:123 - "POST /v1/completions HTTP/1.1" 404 Not Found
INFO 03-02 00:21:57 [logger.py:42] Received request cmpl-71e70fea340e4d169ac14bc21a71ae45-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:21:57 [async_llm.py:261] Added request cmpl-71e70fea340e4d169ac14bc21a71ae45-0.
INFO 03-02 00:21:58 [logger.py:42] Received request cmpl-913a25c0045c4b99871ebe18e90c796b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:21:58 [async_llm.py:261] Added request cmpl-913a25c0045c4b99871ebe18e90c796b-0.
INFO 03-02 00:21:59 [logger.py:42] Received request cmpl-8c835ac8f358456b855d051927b12e9c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:21:59 [async_llm.py:261] Added request cmpl-8c835ac8f358456b855d051927b12e9c-0.
INFO 03-02 00:22:01 [logger.py:42] Received request cmpl-b4cf29eaf6c6426784297082b2f05f2c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:22:01 [async_llm.py:261] Added request cmpl-b4cf29eaf6c6426784297082b2f05f2c-0.
INFO 03-02 00:22:02 [logger.py:42] Received request cmpl-7957b06cfb4640e2981bf9d4c7614f7f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:22:02 [async_llm.py:261] Added request cmpl-7957b06cfb4640e2981bf9d4c7614f7f-0.
INFO 03-02 00:22:03 [logger.py:42] Received request cmpl-a09d427a85424a1e96af0d226bf1c81c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:22:03 [async_llm.py:261] Added request cmpl-a09d427a85424a1e96af0d226bf1c81c-0.
INFO 03-02 00:22:04 [logger.py:42] Received request cmpl-a897de7f15384175ac3bfa2eade0a8a0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:22:04 [async_llm.py:261] Added request cmpl-a897de7f15384175ac3bfa2eade0a8a0-0.
INFO 03-02 00:22:05 [logger.py:42] Received request cmpl-bfde8919c44c4a7c9069d5758733e4c8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:22:05 [async_llm.py:261] Added request cmpl-bfde8919c44c4a7c9069d5758733e4c8-0.
INFO 03-02 00:22:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:22:06 [logger.py:42] Received request cmpl-6ab14cc685fe4769a69e1c550b07d29c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:22:06 [async_llm.py:261] Added request cmpl-6ab14cc685fe4769a69e1c550b07d29c-0.
[The same request/response triple repeats roughly once per second through 00:22:50 — identical prompt and SamplingParams, differing only in request ID and timestamp. The periodic engine statistics lines, reported every 10 s, are the only distinct entries:]
INFO 03-02 00:22:16 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.6 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 1.3%, Prefix cache hit rate: 0.0%
INFO 03-02 00:22:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.9 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:22:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:22:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:22:50 [async_llm.py:261] Added request cmpl-ab3bf087a4634be49cae84e922ebb7ca-0.
INFO 03-02 00:22:52 [logger.py:42] Received request cmpl-2953e78b8e974f5e80e876b4e96aec18-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:22:52 [async_llm.py:261] Added request cmpl-2953e78b8e974f5e80e876b4e96aec18-0.
INFO 03-02 00:22:53 [logger.py:42] Received request cmpl-77feab3c4efe43ec85f5b1b67f8590f6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:22:53 [async_llm.py:261] Added request cmpl-77feab3c4efe43ec85f5b1b67f8590f6-0.
INFO 03-02 00:22:54 [logger.py:42] Received request cmpl-d205d4543a5846fdaca84108ff52b00f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:22:54 [async_llm.py:261] Added request cmpl-d205d4543a5846fdaca84108ff52b00f-0.
INFO 03-02 00:22:55 [logger.py:42] Received request cmpl-98bd88dea74b4477be54dd052a964a69-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:22:55 [async_llm.py:261] Added request cmpl-98bd88dea74b4477be54dd052a964a69-0.
INFO 03-02 00:22:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:22:56 [logger.py:42] Received request cmpl-087476cae23048d182c6f8f6abbca1c9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:22:56 [async_llm.py:261] Added request cmpl-087476cae23048d182c6f8f6abbca1c9-0.
INFO 03-02 00:22:57 [logger.py:42] Received request cmpl-06edf185477e4eed8eaaba81ab90b5f3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:22:57 [async_llm.py:261] Added request cmpl-06edf185477e4eed8eaaba81ab90b5f3-0.
INFO 03-02 00:22:58 [logger.py:42] Received request cmpl-3f30ee12fe0340539c22dc15aaa7ed07-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:22:58 [async_llm.py:261] Added request cmpl-3f30ee12fe0340539c22dc15aaa7ed07-0.
INFO 03-02 00:22:59 [logger.py:42] Received request cmpl-8551ec6f4c0c4284b4f9ff98f3db5c55-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:22:59 [async_llm.py:261] Added request cmpl-8551ec6f4c0c4284b4f9ff98f3db5c55-0.
INFO 03-02 00:23:00 [logger.py:42] Received request cmpl-128ca7415c774a668eb4478269c9d90b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:00 [async_llm.py:261] Added request cmpl-128ca7415c774a668eb4478269c9d90b-0.
INFO 03-02 00:23:01 [logger.py:42] Received request cmpl-511d22fbdf8648beb03c8a9c89de9616-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:01 [async_llm.py:261] Added request cmpl-511d22fbdf8648beb03c8a9c89de9616-0.
INFO 03-02 00:23:02 [logger.py:42] Received request cmpl-8060a4a34f6247008d2bc991d5e594c8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:02 [async_llm.py:261] Added request cmpl-8060a4a34f6247008d2bc991d5e594c8-0.
INFO 03-02 00:23:03 [logger.py:42] Received request cmpl-57ad1e5af86b45779bb7ed25989bd6b4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:03 [async_llm.py:261] Added request cmpl-57ad1e5af86b45779bb7ed25989bd6b4-0.
INFO 03-02 00:23:05 [logger.py:42] Received request cmpl-c0a0395975e84837ae2e05a4d2662865-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:05 [async_llm.py:261] Added request cmpl-c0a0395975e84837ae2e05a4d2662865-0.
INFO 03-02 00:23:06 [logger.py:42] Received request cmpl-46c16e542b0d47a7ad65d95243396912-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:06 [async_llm.py:261] Added request cmpl-46c16e542b0d47a7ad65d95243396912-0.
INFO 03-02 00:23:06 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:23:07 [logger.py:42] Received request cmpl-eac1a9b5cb624a39a440c6864472b15c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:07 [async_llm.py:261] Added request cmpl-eac1a9b5cb624a39a440c6864472b15c-0.
INFO 03-02 00:23:08 [logger.py:42] Received request cmpl-288a896387e543b199dd547ee6cf07f7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:08 [async_llm.py:261] Added request cmpl-288a896387e543b199dd547ee6cf07f7-0.
INFO 03-02 00:23:09 [logger.py:42] Received request cmpl-fc09febcfd374caab3883f665ff12a8f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:09 [async_llm.py:261] Added request cmpl-fc09febcfd374caab3883f665ff12a8f-0.
INFO 03-02 00:23:10 [logger.py:42] Received request cmpl-3bf2c228aa114be09de0849599b2aa20-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:10 [async_llm.py:261] Added request cmpl-3bf2c228aa114be09de0849599b2aa20-0.
INFO 03-02 00:23:11 [logger.py:42] Received request cmpl-d201f9a8699f46409f0d57ef165285e8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:11 [async_llm.py:261] Added request cmpl-d201f9a8699f46409f0d57ef165285e8-0.
INFO 03-02 00:23:12 [logger.py:42] Received request cmpl-77a0c19679424b6eb69947e565d9949d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:12 [async_llm.py:261] Added request cmpl-77a0c19679424b6eb69947e565d9949d-0.
INFO 03-02 00:23:13 [logger.py:42] Received request cmpl-a2de5fa1297f4a1c960d6d8ce5ad39f0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:13 [async_llm.py:261] Added request cmpl-a2de5fa1297f4a1c960d6d8ce5ad39f0-0.
INFO 03-02 00:23:14 [logger.py:42] Received request cmpl-d1f105ac6c4f4a7dbbdce98641e06277-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:14 [async_llm.py:261] Added request cmpl-d1f105ac6c4f4a7dbbdce98641e06277-0.
INFO 03-02 00:23:15 [logger.py:42] Received request cmpl-613cbab74baf41f3aaaca3a5c0fa89da-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:15 [async_llm.py:261] Added request cmpl-613cbab74baf41f3aaaca3a5c0fa89da-0.
INFO 03-02 00:23:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:23:16 [logger.py:42] Received request cmpl-9306ad5e1de442c0a600dd9b5bbb5a01-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:16 [async_llm.py:261] Added request cmpl-9306ad5e1de442c0a600dd9b5bbb5a01-0.
INFO 03-02 00:23:18 [logger.py:42] Received request cmpl-8fd77be4b12e431fa23a80e5bb2a6dc4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:18 [async_llm.py:261] Added request cmpl-8fd77be4b12e431fa23a80e5bb2a6dc4-0.
INFO 03-02 00:23:19 [logger.py:42] Received request cmpl-21d7351e3e0947abbd9e2f9620d6a837-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:19 [async_llm.py:261] Added request cmpl-21d7351e3e0947abbd9e2f9620d6a837-0.
INFO 03-02 00:23:20 [logger.py:42] Received request cmpl-4707064ec24340d6b7c4ce0b68a4ac93-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:20 [async_llm.py:261] Added request cmpl-4707064ec24340d6b7c4ce0b68a4ac93-0.
INFO 03-02 00:23:21 [logger.py:42] Received request cmpl-d4dc4f8299a94cb992aff8ba8f06194d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:21 [async_llm.py:261] Added request cmpl-d4dc4f8299a94cb992aff8ba8f06194d-0.
INFO 03-02 00:23:22 [logger.py:42] Received request cmpl-37ea1d1f522a4b089401540a0b50c649-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:22 [async_llm.py:261] Added request cmpl-37ea1d1f522a4b089401540a0b50c649-0.
INFO 03-02 00:23:23 [logger.py:42] Received request cmpl-10ff0905d73b44b390bef7689f2fcbbe-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:23 [async_llm.py:261] Added request cmpl-10ff0905d73b44b390bef7689f2fcbbe-0.
INFO 03-02 00:23:24 [logger.py:42] Received request cmpl-3b8d510efdc74c938d05dd93348d7279-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:24 [async_llm.py:261] Added request cmpl-3b8d510efdc74c938d05dd93348d7279-0.
INFO 03-02 00:23:25 [logger.py:42] Received request cmpl-fcef91e0fa784e84a612238afa8374da-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:25 [async_llm.py:261] Added request cmpl-fcef91e0fa784e84a612238afa8374da-0.
INFO 03-02 00:23:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:23:26 [logger.py:42] Received request cmpl-deea176227804b1caa9665bd1dd1f504-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:26 [async_llm.py:261] Added request cmpl-deea176227804b1caa9665bd1dd1f504-0.
INFO 03-02 00:23:27 [logger.py:42] Received request cmpl-936616d659724e39afc254814d31f148-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:27 [async_llm.py:261] Added request cmpl-936616d659724e39afc254814d31f148-0.
INFO 03-02 00:23:28 [logger.py:42] Received request cmpl-605bbf8d99c74e488a4bcd32fb16172b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:28 [async_llm.py:261] Added request cmpl-605bbf8d99c74e488a4bcd32fb16172b-0.
INFO 03-02 00:23:29 [logger.py:42] Received request cmpl-db1e02535ebe4e63858f5361da576708-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:29 [async_llm.py:261] Added request cmpl-db1e02535ebe4e63858f5361da576708-0.
INFO 03-02 00:23:31 [logger.py:42] Received request cmpl-9baea32a88de41a7a9b81e9292ae0af8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:31 [async_llm.py:261] Added request cmpl-9baea32a88de41a7a9b81e9292ae0af8-0.
INFO 03-02 00:23:32 [logger.py:42] Received request cmpl-ba0623bbcb4c4131a2a76ccfb2c9f8b4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:32 [async_llm.py:261] Added request cmpl-ba0623bbcb4c4131a2a76ccfb2c9f8b4-0.
INFO 03-02 00:23:33 [logger.py:42] Received request cmpl-c091e70dab37447fad22af7cf4ee2085-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:33 [async_llm.py:261] Added request cmpl-c091e70dab37447fad22af7cf4ee2085-0.
INFO 03-02 00:23:34 [logger.py:42] Received request cmpl-d3961c37455e4ab8a2d3a890751747d2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:34 [async_llm.py:261] Added request cmpl-d3961c37455e4ab8a2d3a890751747d2-0.
INFO 03-02 00:23:35 [logger.py:42] Received request cmpl-477f5dfc729741f68a8ebe612840f46e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:35 [async_llm.py:261] Added request cmpl-477f5dfc729741f68a8ebe612840f46e-0.
INFO 03-02 00:23:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:23:36 [logger.py:42] Received request cmpl-2d9d6ba02d5f4c128eab56fe46306a55-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:36 [async_llm.py:261] Added request cmpl-2d9d6ba02d5f4c128eab56fe46306a55-0.
INFO 03-02 00:23:37 [logger.py:42] Received request cmpl-c6ac86e1f385452fad93a45d1d6f72a5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:37 [async_llm.py:261] Added request cmpl-c6ac86e1f385452fad93a45d1d6f72a5-0.
INFO 03-02 00:23:38 [logger.py:42] Received request cmpl-cfefc940804e4c9781628eb98487b09b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:38 [async_llm.py:261] Added request cmpl-cfefc940804e4c9781628eb98487b09b-0.
INFO 03-02 00:23:39 [logger.py:42] Received request cmpl-78bfb12df85f49f0bf9fb1bebfda8d18-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:39 [async_llm.py:261] Added request cmpl-78bfb12df85f49f0bf9fb1bebfda8d18-0.
INFO 03-02 00:23:40 [logger.py:42] Received request cmpl-6b38d2e00fa94dacb610eb844d21c70a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:40 [async_llm.py:261] Added request cmpl-6b38d2e00fa94dacb610eb844d21c70a-0.
INFO 03-02 00:23:41 [logger.py:42] Received request cmpl-74d0ee3d9eec462ca7fb7b462d764a96-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:41 [async_llm.py:261] Added request cmpl-74d0ee3d9eec462ca7fb7b462d764a96-0.
INFO 03-02 00:23:42 [logger.py:42] Received request cmpl-35a6e1c838ff4ebaac69c104bd27dc6a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:42 [async_llm.py:261] Added request cmpl-35a6e1c838ff4ebaac69c104bd27dc6a-0.
INFO 03-02 00:23:44 [logger.py:42] Received request cmpl-c3487880cb3448f2ba1b8890d5e704b0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:44 [async_llm.py:261] Added request cmpl-c3487880cb3448f2ba1b8890d5e704b0-0.
INFO 03-02 00:23:45 [logger.py:42] Received request cmpl-f9f36ac6f8274631968c3e2278d368c3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:45 [async_llm.py:261] Added request cmpl-f9f36ac6f8274631968c3e2278d368c3-0.
INFO 03-02 00:23:46 [logger.py:42] Received request cmpl-5b104aeafdc5401bb09fad1ab782201d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:46 [async_llm.py:261] Added request cmpl-5b104aeafdc5401bb09fad1ab782201d-0.
INFO 03-02 00:23:46 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.8 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 1.3%, Prefix cache hit rate: 0.0%
INFO 03-02 00:23:47 [logger.py:42] Received request cmpl-f75d9a308803443aabdeb15879059f47-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:47 [async_llm.py:261] Added request cmpl-f75d9a308803443aabdeb15879059f47-0.
INFO 03-02 00:23:48 [logger.py:42] Received request cmpl-49d78552d8824d2997bee30560222304-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:48 [async_llm.py:261] Added request cmpl-49d78552d8824d2997bee30560222304-0.
INFO 03-02 00:23:49 [logger.py:42] Received request cmpl-8edcb42cfd04473881354bcbc99d05ec-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:49 [async_llm.py:261] Added request cmpl-8edcb42cfd04473881354bcbc99d05ec-0.
INFO 03-02 00:23:50 [logger.py:42] Received request cmpl-499f00c8fce24c97a6a18584ff02dcaf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:50 [async_llm.py:261] Added request cmpl-499f00c8fce24c97a6a18584ff02dcaf-0.
INFO 03-02 00:23:51 [logger.py:42] Received request cmpl-1656f0c07aa0458e895afd708cf906a4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:51 [async_llm.py:261] Added request cmpl-1656f0c07aa0458e895afd708cf906a4-0.
INFO 03-02 00:23:52 [logger.py:42] Received request cmpl-40d98c6304784a0dac3840fbbd301afc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:52 [async_llm.py:261] Added request cmpl-40d98c6304784a0dac3840fbbd301afc-0.
INFO 03-02 00:23:53 [logger.py:42] Received request cmpl-e2ff3ed9137245e0aebc857e8cbc625b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:53 [async_llm.py:261] Added request cmpl-e2ff3ed9137245e0aebc857e8cbc625b-0.
INFO 03-02 00:23:54 [logger.py:42] Received request cmpl-38983f3cdb7e48a8b7ec75af1741bd54-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:54 [async_llm.py:261] Added request cmpl-38983f3cdb7e48a8b7ec75af1741bd54-0.
INFO 03-02 00:23:55 [logger.py:42] Received request cmpl-6b4a91a0b0b7421a96b837d1129d585a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:55 [async_llm.py:261] Added request cmpl-6b4a91a0b0b7421a96b837d1129d585a-0.
INFO 03-02 00:23:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.7 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:23:57 [logger.py:42] Received request cmpl-966891e235164fc8b38cd256677ff157-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:57 [async_llm.py:261] Added request cmpl-966891e235164fc8b38cd256677ff157-0.
INFO 03-02 00:23:58 [logger.py:42] Received request cmpl-d192fc4013534afbae7a191cfa78ac65-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:58 [async_llm.py:261] Added request cmpl-d192fc4013534afbae7a191cfa78ac65-0.
INFO 03-02 00:23:59 [logger.py:42] Received request cmpl-d869ebbde58842ccb343492ac1ec2259-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:59 [async_llm.py:261] Added request cmpl-d869ebbde58842ccb343492ac1ec2259-0.
INFO 03-02 00:24:00 [logger.py:42] Received request cmpl-df7f707b8f4d47a0ac29cc13fad09843-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:00 [async_llm.py:261] Added request cmpl-df7f707b8f4d47a0ac29cc13fad09843-0.
INFO 03-02 00:24:01 [logger.py:42] Received request cmpl-63cc6eb17b204b9a95a066a480b77199-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:01 [async_llm.py:261] Added request cmpl-63cc6eb17b204b9a95a066a480b77199-0.
INFO 03-02 00:24:02 [logger.py:42] Received request cmpl-607e6f5a413b44c083cc824b23eecf29-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:02 [async_llm.py:261] Added request cmpl-607e6f5a413b44c083cc824b23eecf29-0.
INFO 03-02 00:24:03 [logger.py:42] Received request cmpl-56930e48a5144c9a9575ced3cd038aab-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:03 [async_llm.py:261] Added request cmpl-56930e48a5144c9a9575ced3cd038aab-0.
INFO 03-02 00:24:04 [logger.py:42] Received request cmpl-246101c91cdc43fbb930164df3db6153-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:04 [async_llm.py:261] Added request cmpl-246101c91cdc43fbb930164df3db6153-0.
INFO 03-02 00:24:05 [logger.py:42] Received request cmpl-9ee24a73d3b7460ba53c8f9c08445ca1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:05 [async_llm.py:261] Added request cmpl-9ee24a73d3b7460ba53c8f9c08445ca1-0.
INFO 03-02 00:24:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:24:06 [logger.py:42] Received request cmpl-31a4c23b402e41368b6dbaad89998a5d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:06 [async_llm.py:261] Added request cmpl-31a4c23b402e41368b6dbaad89998a5d-0.
INFO 03-02 00:24:07 [logger.py:42] Received request cmpl-2fabe9ecab9243fda8fd6d6572488e70-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:07 [async_llm.py:261] Added request cmpl-2fabe9ecab9243fda8fd6d6572488e70-0.
INFO 03-02 00:24:08 [logger.py:42] Received request cmpl-0e5deb041b1d407e811ffeca37c29b9a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:08 [async_llm.py:261] Added request cmpl-0e5deb041b1d407e811ffeca37c29b9a-0.
INFO 03-02 00:24:10 [logger.py:42] Received request cmpl-0dd1ac5493de4fd4babd594b9f92b3d0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:10 [async_llm.py:261] Added request cmpl-0dd1ac5493de4fd4babd594b9f92b3d0-0.
INFO 03-02 00:24:11 [logger.py:42] Received request cmpl-d103b2f099de42329c1e12bee54e16f5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:11 [async_llm.py:261] Added request cmpl-d103b2f099de42329c1e12bee54e16f5-0.
INFO 03-02 00:24:12 [logger.py:42] Received request cmpl-2a3c3255b0264d8dbefa1642b4af5a5e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:12 [async_llm.py:261] Added request cmpl-2a3c3255b0264d8dbefa1642b4af5a5e-0.
INFO 03-02 00:24:13 [logger.py:42] Received request cmpl-763d6f985eaf410cbfe64e3c88f0c6c1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:13 [async_llm.py:261] Added request cmpl-763d6f985eaf410cbfe64e3c88f0c6c1-0.
INFO 03-02 00:24:14 [logger.py:42] Received request cmpl-cb78ba935a1b469d86e2291f00e90ae2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:14 [async_llm.py:261] Added request cmpl-cb78ba935a1b469d86e2291f00e90ae2-0.
INFO 03-02 00:24:15 [logger.py:42] Received request cmpl-d431a9e1654d44c485f782e7be3823b9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:15 [async_llm.py:261] Added request cmpl-d431a9e1654d44c485f782e7be3823b9-0.
INFO 03-02 00:24:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:24:16 [logger.py:42] Received request cmpl-cb6a50f3feee4ee4a967de21d0ec5d2f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:16 [async_llm.py:261] Added request cmpl-cb6a50f3feee4ee4a967de21d0ec5d2f-0.
INFO 03-02 00:24:17 [logger.py:42] Received request cmpl-7bd4e0f3955d42338a0c246ccd93d03b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:17 [async_llm.py:261] Added request cmpl-7bd4e0f3955d42338a0c246ccd93d03b-0.
INFO 03-02 00:24:18 [logger.py:42] Received request cmpl-ed45f3e261e0453996e1c4ef37f4110d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:18 [async_llm.py:261] Added request cmpl-ed45f3e261e0453996e1c4ef37f4110d-0.
INFO 03-02 00:24:19 [logger.py:42] Received request cmpl-ae3b130fb9404d52a62d2edaf92a5fc8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:19 [async_llm.py:261] Added request cmpl-ae3b130fb9404d52a62d2edaf92a5fc8-0.
INFO 03-02 00:24:20 [logger.py:42] Received request cmpl-d14af6390ba145f7a184bdd17ec3061e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:20 [async_llm.py:261] Added request cmpl-d14af6390ba145f7a184bdd17ec3061e-0.
INFO 03-02 00:24:21 [logger.py:42] Received request cmpl-15a780ab2ab24090adfc9766aa9494d6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:21 [async_llm.py:261] Added request cmpl-15a780ab2ab24090adfc9766aa9494d6-0.
INFO 03-02 00:24:23 [logger.py:42] Received request cmpl-3ec5b968cec84352b33f0f982f1fbeed-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:23 [async_llm.py:261] Added request cmpl-3ec5b968cec84352b33f0f982f1fbeed-0.
INFO 03-02 00:24:24 [logger.py:42] Received request cmpl-3ef965f7acb3414cb66083b06ae40d81-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:24 [async_llm.py:261] Added request cmpl-3ef965f7acb3414cb66083b06ae40d81-0.
INFO 03-02 00:24:25 [logger.py:42] Received request cmpl-b4c76fd424c24c259a33a4238c9a7862-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:25 [async_llm.py:261] Added request cmpl-b4c76fd424c24c259a33a4238c9a7862-0.
INFO 03-02 00:24:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:24:26 [logger.py:42] Received request cmpl-69d9ed82e14448a1ae62e2da3f5c60b6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:26 [async_llm.py:261] Added request cmpl-69d9ed82e14448a1ae62e2da3f5c60b6-0.
INFO 03-02 00:24:27 [logger.py:42] Received request cmpl-8c65636f734640ce81a3013368dc2194-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:27 [async_llm.py:261] Added request cmpl-8c65636f734640ce81a3013368dc2194-0.
INFO 03-02 00:24:28 [logger.py:42] Received request cmpl-2680d57bb66b40d1abe9de44320e7b14-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:28 [async_llm.py:261] Added request cmpl-2680d57bb66b40d1abe9de44320e7b14-0.
INFO 03-02 00:24:29 [logger.py:42] Received request cmpl-9a09a1d56c3d466c9bf3758e5b9b1c18-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:29 [async_llm.py:261] Added request cmpl-9a09a1d56c3d466c9bf3758e5b9b1c18-0.
INFO 03-02 00:24:30 [logger.py:42] Received request cmpl-81bb0250b4ce48d0a7aef7a8b5a8f8aa-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:30 [async_llm.py:261] Added request cmpl-81bb0250b4ce48d0a7aef7a8b5a8f8aa-0.
INFO 03-02 00:24:31 [logger.py:42] Received request cmpl-8b81864cea0148c2911f438b75a77edc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:31 [async_llm.py:261] Added request cmpl-8b81864cea0148c2911f438b75a77edc-0.
INFO 03-02 00:24:32 [logger.py:42] Received request cmpl-c71b47586d59451b806e10184eeb6f37-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:32 [async_llm.py:261] Added request cmpl-c71b47586d59451b806e10184eeb6f37-0.
INFO 03-02 00:24:33 [logger.py:42] Received request cmpl-eda68efcd4644e3fad21e6b3f5a4f3af-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:33 [async_llm.py:261] Added request cmpl-eda68efcd4644e3fad21e6b3f5a4f3af-0.
INFO 03-02 00:24:34 [logger.py:42] Received request cmpl-a5acdff75ca748b4adac0aa68472fa67-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:34 [async_llm.py:261] Added request cmpl-a5acdff75ca748b4adac0aa68472fa67-0.
INFO 03-02 00:24:36 [logger.py:42] Received request cmpl-1eaee138eef1428f9a7b0d5291fd4fa6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:36 [async_llm.py:261] Added request cmpl-1eaee138eef1428f9a7b0d5291fd4fa6-0.
INFO 03-02 00:24:36 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:24:37 [logger.py:42] Received request cmpl-9e678dddbeb24e47a56e2ba0f69678f4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:37 [async_llm.py:261] Added request cmpl-9e678dddbeb24e47a56e2ba0f69678f4-0.
INFO 03-02 00:24:38 [logger.py:42] Received request cmpl-6c47f005570f403c9612802783325e08-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:38 [async_llm.py:261] Added request cmpl-6c47f005570f403c9612802783325e08-0.
INFO 03-02 00:24:39 [logger.py:42] Received request cmpl-c6f6ce8669264317ba9bfe95d41d3e0a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:39 [async_llm.py:261] Added request cmpl-c6f6ce8669264317ba9bfe95d41d3e0a-0.
INFO 03-02 00:24:40 [logger.py:42] Received request cmpl-8a36b03ac25d4f77be3ebd52ea9c38c7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:40 [async_llm.py:261] Added request cmpl-8a36b03ac25d4f77be3ebd52ea9c38c7-0.
INFO 03-02 00:24:41 [logger.py:42] Received request cmpl-edf88346256f4d35aa5041d91daf3cb6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:41 [async_llm.py:261] Added request cmpl-edf88346256f4d35aa5041d91daf3cb6-0.
INFO 03-02 00:24:42 [logger.py:42] Received request cmpl-f9e73747eccb43eeb33ae29b6267d832-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:42 [async_llm.py:261] Added request cmpl-f9e73747eccb43eeb33ae29b6267d832-0.
[… 3 further identical request/response entries (00:24:43–00:24:45) elided; only the timestamp and request ID differ …]
INFO 03-02 00:24:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
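The engine's reported averages are consistent with the request pattern above. A quick sanity check, assuming roughly nine requests land in each 10-second logging window (the window length and request count are inferred from the log timestamps, not stated by the engine):

```python
# Sanity-check the engine's reported averages against the request pattern.
# Assumed: ~9 requests per 10-second logging window (inferred from timestamps).
requests_in_window = 9
prompt_tokens_per_request = 7   # len of prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761]
gen_tokens_per_request = 5      # max_tokens=5; an upper bound, since ignore_eos=False

window_seconds = 10.0
prompt_tps = requests_in_window * prompt_tokens_per_request / window_seconds
gen_tps = requests_in_window * gen_tokens_per_request / window_seconds

print(f"prompt throughput ~ {prompt_tps:.1f} tokens/s")      # 6.3, matching the log
print(f"generation throughput ~ {gen_tps:.1f} tokens/s")     # 4.5, matching the log
```

Both values match the logged averages exactly, which suggests every request is running to its 5-token cap.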
[… 9 identical request/response entries (00:24:46–00:24:55) elided; only the timestamp and request ID differ …]
INFO 03-02 00:24:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[… 9 identical request/response entries (00:24:56–00:25:05) elided; only the timestamp and request ID differ …]
INFO 03-02 00:25:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[… 10 identical request/response entries (00:25:06–00:25:16) elided; only the timestamp and request ID differ …]
INFO 03-02 00:25:16 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[… 9 identical request/response entries (00:25:17–00:25:25) elided; only the timestamp and request ID differ …]
INFO 03-02 00:25:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
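Each request cycle in the log above is a single OpenAI-compatible completion call hitting the funcpod's vLLM server. As a minimal sketch of what the client side could look like, the snippet below builds (but does not send) a POST to the `/v1/completions` route with a payload mirroring the SamplingParams visible in the log entries; the base URL is a placeholder, not something taken from this log:

```python
import json
import urllib.request

# Payload mirroring the parameters logged above:
# temperature=0.0, top_p=1.0, n=1, max_tokens=5, no stop sequences.
payload = {
    "model": "translategemma-27b-it-FP8-Dynamic",
    "prompt": "write a quick sort algorithm.",
    "max_tokens": 5,
    "temperature": 0.0,
    "top_p": 1.0,
    "n": 1,
}

# Placeholder endpoint; substitute the funcpod's actual URL.
BASE_URL = "http://localhost:8000"

def build_request(base_url: str, body: dict) -> urllib.request.Request:
    """Build a POST request against the OpenAI-compatible completions route."""
    return urllib.request.Request(
        url=f"{base_url}/v1/completions",
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_request(BASE_URL, payload)
print(req.full_url)      # http://localhost:8000/v1/completions
print(req.get_method())  # POST
```

Sending it with `urllib.request.urlopen(req)` against a live funcpod would produce exactly the `Received request` / `200 OK` / `Added request` triplet seen in each cycle above.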
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:06 [async_llm.py:261] Added request cmpl-80a44e771b9f4b008d587675132702ee-0.
INFO 03-02 00:26:06 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:26:07 [logger.py:42] Received request cmpl-ee2a0e29af6446819c27a0f188e65a0a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:07 [async_llm.py:261] Added request cmpl-ee2a0e29af6446819c27a0f188e65a0a-0.
INFO 03-02 00:26:08 [logger.py:42] Received request cmpl-092b7393f6964a0f9e67e9908ef2c2cd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:08 [async_llm.py:261] Added request cmpl-092b7393f6964a0f9e67e9908ef2c2cd-0.
INFO 03-02 00:26:09 [logger.py:42] Received request cmpl-683b54835d7e49c7bf8379e511212e21-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:09 [async_llm.py:261] Added request cmpl-683b54835d7e49c7bf8379e511212e21-0.
INFO 03-02 00:26:10 [logger.py:42] Received request cmpl-de780371a1894170af6e4be08835c51f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:10 [async_llm.py:261] Added request cmpl-de780371a1894170af6e4be08835c51f-0.
INFO 03-02 00:26:11 [logger.py:42] Received request cmpl-17c82365e0b24a10b854ad9a6dd2ecde-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:11 [async_llm.py:261] Added request cmpl-17c82365e0b24a10b854ad9a6dd2ecde-0.
INFO 03-02 00:26:12 [logger.py:42] Received request cmpl-b95df2e367bd455282bd8e04922375d9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:12 [async_llm.py:261] Added request cmpl-b95df2e367bd455282bd8e04922375d9-0.
INFO 03-02 00:26:13 [logger.py:42] Received request cmpl-f51c2a3d7cf84d6b8255c5557c6e6898-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:13 [async_llm.py:261] Added request cmpl-f51c2a3d7cf84d6b8255c5557c6e6898-0.
INFO 03-02 00:26:14 [logger.py:42] Received request cmpl-b69f91f9ab2a4e36bcd50428c27f17af-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:14 [async_llm.py:261] Added request cmpl-b69f91f9ab2a4e36bcd50428c27f17af-0.
INFO 03-02 00:26:15 [logger.py:42] Received request cmpl-45a49da502a547fc92016e616cbf88e2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:15 [async_llm.py:261] Added request cmpl-45a49da502a547fc92016e616cbf88e2-0.
INFO 03-02 00:26:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:26:16 [logger.py:42] Received request cmpl-451f6922b36b4876a18484ffeb579cfb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:16 [async_llm.py:261] Added request cmpl-451f6922b36b4876a18484ffeb579cfb-0.
INFO 03-02 00:26:17 [logger.py:42] Received request cmpl-079db7a42762463e8f3f797b7235bff0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:17 [async_llm.py:261] Added request cmpl-079db7a42762463e8f3f797b7235bff0-0.
INFO 03-02 00:26:19 [logger.py:42] Received request cmpl-d5e7e674aefa49339345e14dc658b506-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:19 [async_llm.py:261] Added request cmpl-d5e7e674aefa49339345e14dc658b506-0.
INFO 03-02 00:26:20 [logger.py:42] Received request cmpl-835c4bbd790845e7825c91f946650d47-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:20 [async_llm.py:261] Added request cmpl-835c4bbd790845e7825c91f946650d47-0.
INFO 03-02 00:26:21 [logger.py:42] Received request cmpl-1ccce2bf7eb547fcbf967d50c7ccbfa2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:21 [async_llm.py:261] Added request cmpl-1ccce2bf7eb547fcbf967d50c7ccbfa2-0.
INFO 03-02 00:26:22 [logger.py:42] Received request cmpl-0f6f4a081ed44b7abc7dd1ea511fcab7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:22 [async_llm.py:261] Added request cmpl-0f6f4a081ed44b7abc7dd1ea511fcab7-0.
INFO 03-02 00:26:23 [logger.py:42] Received request cmpl-6fe64b955303403aaaefbaf51abc8170-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:23 [async_llm.py:261] Added request cmpl-6fe64b955303403aaaefbaf51abc8170-0.
INFO 03-02 00:26:24 [logger.py:42] Received request cmpl-8c435be6fe944a9eb0210d280cfcbdf0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:24 [async_llm.py:261] Added request cmpl-8c435be6fe944a9eb0210d280cfcbdf0-0.
INFO 03-02 00:26:25 [logger.py:42] Received request cmpl-eff0621e1a704debb0d8dc359c3fc69e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:25 [async_llm.py:261] Added request cmpl-eff0621e1a704debb0d8dc359c3fc69e-0.
INFO 03-02 00:26:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:26:26 [logger.py:42] Received request cmpl-e2f0c852e7d94fb7bf5baa1debd00ce7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:26 [async_llm.py:261] Added request cmpl-e2f0c852e7d94fb7bf5baa1debd00ce7-0.
INFO 03-02 00:26:27 [logger.py:42] Received request cmpl-b53d48fc720c448c9b8de2d2aee09156-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:27 [async_llm.py:261] Added request cmpl-b53d48fc720c448c9b8de2d2aee09156-0.
INFO 03-02 00:26:28 [logger.py:42] Received request cmpl-2bf6cdd1070248be9b0303da154c429d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:28 [async_llm.py:261] Added request cmpl-2bf6cdd1070248be9b0303da154c429d-0.
INFO 03-02 00:26:29 [logger.py:42] Received request cmpl-8ccc3855b3514723829a3d32ea851ee5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:29 [async_llm.py:261] Added request cmpl-8ccc3855b3514723829a3d32ea851ee5-0.
INFO 03-02 00:26:30 [logger.py:42] Received request cmpl-01e4e40dff87402bab3e5468642c6009-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:30 [async_llm.py:261] Added request cmpl-01e4e40dff87402bab3e5468642c6009-0.
INFO 03-02 00:26:32 [logger.py:42] Received request cmpl-233f8c59aa8d4a5dbd2de41c8d83b99f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:32 [async_llm.py:261] Added request cmpl-233f8c59aa8d4a5dbd2de41c8d83b99f-0.
INFO 03-02 00:26:33 [logger.py:42] Received request cmpl-68cb146749b9422397306e4e5f4bacd8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:33 [async_llm.py:261] Added request cmpl-68cb146749b9422397306e4e5f4bacd8-0.
INFO 03-02 00:26:34 [logger.py:42] Received request cmpl-d09ead719c8345f3a80a2c519f011055-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:34 [async_llm.py:261] Added request cmpl-d09ead719c8345f3a80a2c519f011055-0.
INFO 03-02 00:26:35 [logger.py:42] Received request cmpl-7f47b791d5de4dab962798595e1105b9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:35 [async_llm.py:261] Added request cmpl-7f47b791d5de4dab962798595e1105b9-0.
INFO 03-02 00:26:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:26:36 [logger.py:42] Received request cmpl-981fbc3ddd2740a7b88262e284178357-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:36 [async_llm.py:261] Added request cmpl-981fbc3ddd2740a7b88262e284178357-0.
INFO 03-02 00:26:37 [logger.py:42] Received request cmpl-8ace244ef56940a782bc0c6e6c590c1b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:37 [async_llm.py:261] Added request cmpl-8ace244ef56940a782bc0c6e6c590c1b-0.
INFO 03-02 00:26:38 [logger.py:42] Received request cmpl-336cd748aef944059f916aee67db94f0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:38 [async_llm.py:261] Added request cmpl-336cd748aef944059f916aee67db94f0-0.
INFO 03-02 00:26:39 [logger.py:42] Received request cmpl-61db56c9dcbd4a8ea50bc9242b9af0d7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:39 [async_llm.py:261] Added request cmpl-61db56c9dcbd4a8ea50bc9242b9af0d7-0.
INFO 03-02 00:26:40 [logger.py:42] Received request cmpl-13e776ca420248e195559c55a54abf7a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:40 [async_llm.py:261] Added request cmpl-13e776ca420248e195559c55a54abf7a-0.
INFO 03-02 00:26:41 [logger.py:42] Received request cmpl-1938897e7a4847bc99eda29ff43e2102-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:41 [async_llm.py:261] Added request cmpl-1938897e7a4847bc99eda29ff43e2102-0.
INFO 03-02 00:26:42 [logger.py:42] Received request cmpl-088b4904531e4a7c930eae24681539a1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:42 [async_llm.py:261] Added request cmpl-088b4904531e4a7c930eae24681539a1-0.
INFO 03-02 00:26:44 [logger.py:42] Received request cmpl-2033222763874a398b22bb30d8671239-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:44 [async_llm.py:261] Added request cmpl-2033222763874a398b22bb30d8671239-0.
INFO 03-02 00:26:45 [logger.py:42] Received request cmpl-143f7d43dbdc433e9af3d2308602c9ce-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:45 [async_llm.py:261] Added request cmpl-143f7d43dbdc433e9af3d2308602c9ce-0.
INFO 03-02 00:26:46 [logger.py:42] Received request cmpl-80d707ee1c4049a5aa25b78a1b761cb1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:46 [async_llm.py:261] Added request cmpl-80d707ee1c4049a5aa25b78a1b761cb1-0.
INFO 03-02 00:26:46 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
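The periodic `loggers.py` stats line above can be cross-checked against the request pattern in the log: each request carries a 7-token prompt (`prompt_token_ids` has 7 entries) and `max_tokens=5`, arriving roughly once per second (read off the timestamps), which is consistent with the reported 7.0 prompt tokens/s and 5.0 generation tokens/s. A minimal sanity-check sketch, where the request rate is an assumption inferred from the timestamps:

```python
# Cross-check vLLM's reported averages against the observed request pattern.
prompt_tokens_per_request = 7    # len(prompt_token_ids) in each "Received request" entry
max_tokens_per_request = 5       # max_tokens=5 in the logged SamplingParams
requests_per_second = 1.0        # assumption: ~1 request/s, read off the log timestamps

avg_prompt_throughput = prompt_tokens_per_request * requests_per_second
avg_generation_throughput = max_tokens_per_request * requests_per_second

print(avg_prompt_throughput)      # 7.0 — matches "Avg prompt throughput: 7.0 tokens/s"
print(avg_generation_throughput)  # 5.0 — matches "Avg generation throughput: 5.0 tokens/s"
```

The slightly lower figures in later stats lines (6.3 and 4.5 tokens/s) correspond to windows where the occasional 2-second gap between requests pulls the effective request rate below 1/s.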
INFO 03-02 00:26:47 [logger.py:42] Received request cmpl-5e3775a65a8f40a79fad191970c0141e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:47 [async_llm.py:261] Added request cmpl-5e3775a65a8f40a79fad191970c0141e-0.
INFO 03-02 00:26:48 [logger.py:42] Received request cmpl-c1eea83020c34e348809da402873c4b2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:48 [async_llm.py:261] Added request cmpl-c1eea83020c34e348809da402873c4b2-0.
INFO 03-02 00:26:49 [logger.py:42] Received request cmpl-07b4e65c492a475ba9ec9835c1915e5c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:49 [async_llm.py:261] Added request cmpl-07b4e65c492a475ba9ec9835c1915e5c-0.
INFO 03-02 00:26:50 [logger.py:42] Received request cmpl-7afdf0a8db604c36aeb463c7918aa9dc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:50 [async_llm.py:261] Added request cmpl-7afdf0a8db604c36aeb463c7918aa9dc-0.
INFO 03-02 00:26:51 [logger.py:42] Received request cmpl-e6e12dd3474a4386831e6e6b7b48e955-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:51 [async_llm.py:261] Added request cmpl-e6e12dd3474a4386831e6e6b7b48e955-0.
INFO 03-02 00:26:52 [logger.py:42] Received request cmpl-1ca51941eba64b8b83e18dbc261f6131-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:52 [async_llm.py:261] Added request cmpl-1ca51941eba64b8b83e18dbc261f6131-0.
INFO:  1.2.3.4:123 - "POST /v1/completions HTTP/1.1" 404 Not Found
INFO 03-02 00:26:53 [logger.py:42] Received request cmpl-39be1d7b82c14304970dc5d68b7da5da-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:53 [async_llm.py:261] Added request cmpl-39be1d7b82c14304970dc5d68b7da5da-0.
INFO 03-02 00:26:54 [logger.py:42] Received request cmpl-6f5bde0aa35a498f9f998a3f9d178cdd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:54 [async_llm.py:261] Added request cmpl-6f5bde0aa35a498f9f998a3f9d178cdd-0.
INFO 03-02 00:26:55 [logger.py:42] Received request cmpl-0b50f4ad07274fa58402c77f07d6dcf0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:55 [async_llm.py:261] Added request cmpl-0b50f4ad07274fa58402c77f07d6dcf0-0.
INFO 03-02 00:26:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:26:57 [logger.py:42] Received request cmpl-9cf01c6146624ed883560b41be8d68d7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:57 [async_llm.py:261] Added request cmpl-9cf01c6146624ed883560b41be8d68d7-0.
INFO 03-02 00:26:58 [logger.py:42] Received request cmpl-a068c512cf7e4d86a1d044bfb0e2c35b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:58 [async_llm.py:261] Added request cmpl-a068c512cf7e4d86a1d044bfb0e2c35b-0.
INFO 03-02 00:26:59 [logger.py:42] Received request cmpl-7913b6ceefc64d3582231dcfd1a978b2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:59 [async_llm.py:261] Added request cmpl-7913b6ceefc64d3582231dcfd1a978b2-0.
INFO 03-02 00:27:00 [logger.py:42] Received request cmpl-a29ef4b5604c46b0948af4e29b5a5d5d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:00 [async_llm.py:261] Added request cmpl-a29ef4b5604c46b0948af4e29b5a5d5d-0.
INFO 03-02 00:27:01 [logger.py:42] Received request cmpl-20971e13248e47f98e8435defa5d035d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:01 [async_llm.py:261] Added request cmpl-20971e13248e47f98e8435defa5d035d-0.
INFO 03-02 00:27:02 [logger.py:42] Received request cmpl-3b4cceedaa5e446ea08026e32550f3d3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:02 [async_llm.py:261] Added request cmpl-3b4cceedaa5e446ea08026e32550f3d3-0.
INFO 03-02 00:27:03 [logger.py:42] Received request cmpl-5b0fc1fb931240e894c7aa938a390490-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:03 [async_llm.py:261] Added request cmpl-5b0fc1fb931240e894c7aa938a390490-0.
INFO 03-02 00:27:04 [logger.py:42] Received request cmpl-2045b251de96464ab388fb72111be5f9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:04 [async_llm.py:261] Added request cmpl-2045b251de96464ab388fb72111be5f9-0.
INFO 03-02 00:27:05 [logger.py:42] Received request cmpl-2e6fcc72b80d4449b2144ed583bfe675-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:05 [async_llm.py:261] Added request cmpl-2e6fcc72b80d4449b2144ed583bfe675-0.
INFO 03-02 00:27:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:27:06 [logger.py:42] Received request cmpl-606e0fc264e749a3aacbe55482f3d2f2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:06 [async_llm.py:261] Added request cmpl-606e0fc264e749a3aacbe55482f3d2f2-0.
INFO 03-02 00:27:07 [logger.py:42] Received request cmpl-013987026fbf4b349adfdf2e7ee4a263-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:07 [async_llm.py:261] Added request cmpl-013987026fbf4b349adfdf2e7ee4a263-0.
INFO 03-02 00:27:08 [logger.py:42] Received request cmpl-71b838a0711148188056c8b15b719a14-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:08 [async_llm.py:261] Added request cmpl-71b838a0711148188056c8b15b719a14-0.
INFO 03-02 00:27:10 [logger.py:42] Received request cmpl-f94df8efeb1c4d8aaae831982836d7ed-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:10 [async_llm.py:261] Added request cmpl-f94df8efeb1c4d8aaae831982836d7ed-0.
INFO 03-02 00:27:11 [logger.py:42] Received request cmpl-bdb95c165e854c2d8d4f377a7aae5625-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:11 [async_llm.py:261] Added request cmpl-bdb95c165e854c2d8d4f377a7aae5625-0.
INFO 03-02 00:27:12 [logger.py:42] Received request cmpl-ea2a66a5c1c642c5a6821b18e172d3d1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:12 [async_llm.py:261] Added request cmpl-ea2a66a5c1c642c5a6821b18e172d3d1-0.
INFO 03-02 00:27:13 [logger.py:42] Received request cmpl-bb2bec23b74844b9acfcd59193002bef-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:13 [async_llm.py:261] Added request cmpl-bb2bec23b74844b9acfcd59193002bef-0.
INFO 03-02 00:27:14 [logger.py:42] Received request cmpl-80e8ef22d0c34502bb7e43104f0e120d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:14 [async_llm.py:261] Added request cmpl-80e8ef22d0c34502bb7e43104f0e120d-0.
INFO 03-02 00:27:15 [logger.py:42] Received request cmpl-eaa1209f2476411da516ffe8bb429dd5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:15 [async_llm.py:261] Added request cmpl-eaa1209f2476411da516ffe8bb429dd5-0.
INFO 03-02 00:27:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:27:16 [logger.py:42] Received request cmpl-1bff4a96b34f40878ab895e5aeff584e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:16 [async_llm.py:261] Added request cmpl-1bff4a96b34f40878ab895e5aeff584e-0.
INFO 03-02 00:27:17 [logger.py:42] Received request cmpl-ab087852593e404c925cd0733e9bc1f8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:17 [async_llm.py:261] Added request cmpl-ab087852593e404c925cd0733e9bc1f8-0.
INFO 03-02 00:27:18 [logger.py:42] Received request cmpl-42e13efc3d5f451fb57f5bee892a1423-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:18 [async_llm.py:261] Added request cmpl-42e13efc3d5f451fb57f5bee892a1423-0.
INFO 03-02 00:27:19 [logger.py:42] Received request cmpl-e75a2b19ff9a499a82fb847d392d4403-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:19 [async_llm.py:261] Added request cmpl-e75a2b19ff9a499a82fb847d392d4403-0.
INFO 03-02 00:27:20 [logger.py:42] Received request cmpl-1c5054d5162d449e8524b424ed70641e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:20 [async_llm.py:261] Added request cmpl-1c5054d5162d449e8524b424ed70641e-0.
INFO 03-02 00:27:21 [logger.py:42] Received request cmpl-6f52f179c49d4ef0bbcfa640f848bcdb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:21 [async_llm.py:261] Added request cmpl-6f52f179c49d4ef0bbcfa640f848bcdb-0.
INFO 03-02 00:27:23 [logger.py:42] Received request cmpl-19b8f66802354ac19c562e35cef182c2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:23 [async_llm.py:261] Added request cmpl-19b8f66802354ac19c562e35cef182c2-0.
INFO 03-02 00:27:24 [logger.py:42] Received request cmpl-cc2c878c2bb94a7f997aa9da2e7587a5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:24 [async_llm.py:261] Added request cmpl-cc2c878c2bb94a7f997aa9da2e7587a5-0.
INFO 03-02 00:27:25 [logger.py:42] Received request cmpl-a6e0bfd0af17452f8e0db7b7a761fc69-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:25 [async_llm.py:261] Added request cmpl-a6e0bfd0af17452f8e0db7b7a761fc69-0.
INFO 03-02 00:27:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:27:26 [logger.py:42] Received request cmpl-d381e855ddc04d97a136a75f36a467ce-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:26 [async_llm.py:261] Added request cmpl-d381e855ddc04d97a136a75f36a467ce-0.
INFO 03-02 00:27:27 [logger.py:42] Received request cmpl-784ed466b94a48ff9b3bd4e8339155fe-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:27 [async_llm.py:261] Added request cmpl-784ed466b94a48ff9b3bd4e8339155fe-0.
INFO 03-02 00:27:28 [logger.py:42] Received request cmpl-5f2da83db55d460f99bcdb44ed6a0d41-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:28 [async_llm.py:261] Added request cmpl-5f2da83db55d460f99bcdb44ed6a0d41-0.
INFO 03-02 00:27:29 [logger.py:42] Received request cmpl-1f9eea51b8cf4326931ea46fb4f41426-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:29 [async_llm.py:261] Added request cmpl-1f9eea51b8cf4326931ea46fb4f41426-0.
INFO 03-02 00:27:30 [logger.py:42] Received request cmpl-454974f542654afdb9bbdb2a97298bf0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:30 [async_llm.py:261] Added request cmpl-454974f542654afdb9bbdb2a97298bf0-0.
INFO 03-02 00:27:31 [logger.py:42] Received request cmpl-f6e3498f27be47cb87ad25fb8c87a48e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:31 [async_llm.py:261] Added request cmpl-f6e3498f27be47cb87ad25fb8c87a48e-0.
INFO 03-02 00:27:32 [logger.py:42] Received request cmpl-5ae2a61cc6f04619ad6673dd981fdc5e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:32 [async_llm.py:261] Added request cmpl-5ae2a61cc6f04619ad6673dd981fdc5e-0.
INFO 03-02 00:27:33 [logger.py:42] Received request cmpl-d636550f9eca4b1fa7620d451be787cf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:33 [async_llm.py:261] Added request cmpl-d636550f9eca4b1fa7620d451be787cf-0.
INFO 03-02 00:27:34 [logger.py:42] Received request cmpl-1801ecb7601648d9b35d3298fdec0889-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:34 [async_llm.py:261] Added request cmpl-1801ecb7601648d9b35d3298fdec0889-0.
INFO 03-02 00:27:36 [logger.py:42] Received request cmpl-697627b69ffd474d801bd15942463b5c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:36 [async_llm.py:261] Added request cmpl-697627b69ffd474d801bd15942463b5c-0.
INFO 03-02 00:27:36 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:27:37 [logger.py:42] Received request cmpl-48b1f0cfd44b4fa1a1b43ac9e7be0be1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:37 [async_llm.py:261] Added request cmpl-48b1f0cfd44b4fa1a1b43ac9e7be0be1-0.
INFO 03-02 00:27:38 [logger.py:42] Received request cmpl-b0555787ff81435ba09c43bbca866749-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:38 [async_llm.py:261] Added request cmpl-b0555787ff81435ba09c43bbca866749-0.
INFO 03-02 00:27:39 [logger.py:42] Received request cmpl-58ab7d0bd63449f8ba2f075b8077aedc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:39 [async_llm.py:261] Added request cmpl-58ab7d0bd63449f8ba2f075b8077aedc-0.
INFO 03-02 00:27:40 [logger.py:42] Received request cmpl-1cfea68c824343299173e605d477eb99-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:40 [async_llm.py:261] Added request cmpl-1cfea68c824343299173e605d477eb99-0.
INFO 03-02 00:27:41 [logger.py:42] Received request cmpl-83f0ebfe3be540d2b24e438ca80d4376-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:41 [async_llm.py:261] Added request cmpl-83f0ebfe3be540d2b24e438ca80d4376-0.
INFO 03-02 00:27:42 [logger.py:42] Received request cmpl-c602c24238bd41efa45cb0b2d10dae5f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:42 [async_llm.py:261] Added request cmpl-c602c24238bd41efa45cb0b2d10dae5f-0.
INFO 03-02 00:27:43 [logger.py:42] Received request cmpl-e0f2269c431c422da9c1760afa09c3e2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:43 [async_llm.py:261] Added request cmpl-e0f2269c431c422da9c1760afa09c3e2-0.
INFO 03-02 00:27:44 [logger.py:42] Received request cmpl-0ad6d5590ab246c7b96dc52faac98ddd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:44 [async_llm.py:261] Added request cmpl-0ad6d5590ab246c7b96dc52faac98ddd-0.
INFO 03-02 00:27:45 [logger.py:42] Received request cmpl-c3b2b9bb048a4593a6908b6e14f271a0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:45 [async_llm.py:261] Added request cmpl-c3b2b9bb048a4593a6908b6e14f271a0-0.
INFO 03-02 00:27:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:27:46 [logger.py:42] Received request cmpl-c52fe1d9c46846f08ffe3e8cb3f98436-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:46 [async_llm.py:261] Added request cmpl-c52fe1d9c46846f08ffe3e8cb3f98436-0.
INFO 03-02 00:27:47 [logger.py:42] Received request cmpl-75196b88328a4614a4ce8aa32efcd444-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:47 [async_llm.py:261] Added request cmpl-75196b88328a4614a4ce8aa32efcd444-0.
INFO 03-02 00:27:49 [logger.py:42] Received request cmpl-5e6d6a4870064c4291a79ea0cc5613a6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:49 [async_llm.py:261] Added request cmpl-5e6d6a4870064c4291a79ea0cc5613a6-0.
INFO 03-02 00:27:50 [logger.py:42] Received request cmpl-7a4baf7f61cd4526abf0fa72d892f6b4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:50 [async_llm.py:261] Added request cmpl-7a4baf7f61cd4526abf0fa72d892f6b4-0.
INFO 03-02 00:27:51 [logger.py:42] Received request cmpl-5fc0c977a3f04a6b899de0d3e060fa56-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:51 [async_llm.py:261] Added request cmpl-5fc0c977a3f04a6b899de0d3e060fa56-0.
INFO 03-02 00:27:52 [logger.py:42] Received request cmpl-179df249ed4945c686847eb3daf07f06-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:52 [async_llm.py:261] Added request cmpl-179df249ed4945c686847eb3daf07f06-0.
INFO 03-02 00:27:53 [logger.py:42] Received request cmpl-11b6b0a161124f22a17728ce251714fc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:53 [async_llm.py:261] Added request cmpl-11b6b0a161124f22a17728ce251714fc-0.
INFO 03-02 00:27:54 [logger.py:42] Received request cmpl-3b961a7f479540dc8999a655f1a9a3c0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:54 [async_llm.py:261] Added request cmpl-3b961a7f479540dc8999a655f1a9a3c0-0.
INFO 03-02 00:27:55 [logger.py:42] Received request cmpl-902ba698d4be4539b65244d2d31b6cb8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:55 [async_llm.py:261] Added request cmpl-902ba698d4be4539b65244d2d31b6cb8-0.
INFO 03-02 00:27:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:27:56 [logger.py:42] Received request cmpl-244dc87077864ca5a9a2a16f4ef7b563-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:56 [async_llm.py:261] Added request cmpl-244dc87077864ca5a9a2a16f4ef7b563-0.
INFO 03-02 00:27:57 [logger.py:42] Received request cmpl-c6bfc5ced918485e86639b41fabd6872-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:57 [async_llm.py:261] Added request cmpl-c6bfc5ced918485e86639b41fabd6872-0.
[… 7 further request/response entries (00:27:58–00:28:05) omitted: each repeats the same prompt and SamplingParams, differing only in request ID and timestamp …]
INFO 03-02 00:28:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[… 10 further request/response entries (00:28:06–00:28:16) omitted: identical prompt and SamplingParams, new request IDs …]
INFO 03-02 00:28:16 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[… 9 further request/response entries (00:28:17–00:28:25) omitted: identical prompt and SamplingParams, new request IDs …]
INFO 03-02 00:28:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[… 9 further request/response entries (00:28:26–00:28:35) omitted: identical prompt and SamplingParams, new request IDs …]
INFO 03-02 00:28:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[… 5 further request/response entries (00:28:36–00:28:41) omitted: identical prompt and SamplingParams, new request IDs …]
INFO 03-02 00:28:42 [logger.py:42] Received request cmpl-a07291f14379400bbf91acc58bb81d01-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:28:42 [async_llm.py:261] Added request cmpl-a07291f14379400bbf91acc58bb81d01-0.
INFO 03-02 00:28:43 [logger.py:42] Received request cmpl-7a4fe8abd009499593ccae02227fa1bc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:28:43 [async_llm.py:261] Added request cmpl-7a4fe8abd009499593ccae02227fa1bc-0.
INFO 03-02 00:28:44 [logger.py:42] Received request cmpl-0980638364b1452a9b9475f536be4ea8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:28:44 [async_llm.py:261] Added request cmpl-0980638364b1452a9b9475f536be4ea8-0.
INFO 03-02 00:28:45 [logger.py:42] Received request cmpl-49052a8c912843e381ab5371f973939f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:28:45 [async_llm.py:261] Added request cmpl-49052a8c912843e381ab5371f973939f-0.
INFO 03-02 00:28:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:28:46 [logger.py:42] Received request cmpl-9ba9807ae26d4b2e9c4d3206392a5a1c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:28:46 [async_llm.py:261] Added request cmpl-9ba9807ae26d4b2e9c4d3206392a5a1c-0.
INFO 03-02 00:28:47 [logger.py:42] Received request cmpl-0405ea9069da488d8bcce60f4bbb3792-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:28:47 [async_llm.py:261] Added request cmpl-0405ea9069da488d8bcce60f4bbb3792-0.
INFO 03-02 00:28:48 [logger.py:42] Received request cmpl-e52d2eaf13174e15a7c3491accfb55b3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:28:48 [async_llm.py:261] Added request cmpl-e52d2eaf13174e15a7c3491accfb55b3-0.
INFO 03-02 00:28:49 [logger.py:42] Received request cmpl-00ab7caef77d41989063a4d723596138-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:28:49 [async_llm.py:261] Added request cmpl-00ab7caef77d41989063a4d723596138-0.
INFO 03-02 00:28:50 [logger.py:42] Received request cmpl-ec43cbddb97247c099838239d47bc1a8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:28:50 [async_llm.py:261] Added request cmpl-ec43cbddb97247c099838239d47bc1a8-0.
INFO 03-02 00:28:51 [logger.py:42] Received request cmpl-9a2885945abd4431b751193f6bc7362c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:28:51 [async_llm.py:261] Added request cmpl-9a2885945abd4431b751193f6bc7362c-0.
INFO 03-02 00:28:52 [logger.py:42] Received request cmpl-0a505db7c40b485f8741e026d2501297-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:28:52 [async_llm.py:261] Added request cmpl-0a505db7c40b485f8741e026d2501297-0.
INFO 03-02 00:28:54 [logger.py:42] Received request cmpl-bc2d8cd25c894c14a4dc21bf71e2a408-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:28:54 [async_llm.py:261] Added request cmpl-bc2d8cd25c894c14a4dc21bf71e2a408-0.
INFO 03-02 00:28:55 [logger.py:42] Received request cmpl-24bdb1c9c6a0492ba106fd3d61ad1e77-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:28:55 [async_llm.py:261] Added request cmpl-24bdb1c9c6a0492ba106fd3d61ad1e77-0.
INFO 03-02 00:28:56 [logger.py:42] Received request cmpl-a2b48354360b47cb8a2bbbb26d4873f6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:28:56 [async_llm.py:261] Added request cmpl-a2b48354360b47cb8a2bbbb26d4873f6-0.
INFO 03-02 00:28:56 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.6 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 1.3%, Prefix cache hit rate: 0.0%
INFO 03-02 00:28:57 [logger.py:42] Received request cmpl-8982c81fa2544a5f8495ca12596b0836-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:28:57 [async_llm.py:261] Added request cmpl-8982c81fa2544a5f8495ca12596b0836-0.
INFO 03-02 00:28:58 [logger.py:42] Received request cmpl-f7bc947231f2457c9647c5cce7db9d45-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:28:58 [async_llm.py:261] Added request cmpl-f7bc947231f2457c9647c5cce7db9d45-0.
INFO 03-02 00:28:59 [logger.py:42] Received request cmpl-f8b26c4349c8472b9e01e15ee6383c06-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:28:59 [async_llm.py:261] Added request cmpl-f8b26c4349c8472b9e01e15ee6383c06-0.
INFO 03-02 00:29:00 [logger.py:42] Received request cmpl-dce95913a3084cc39be1c61c3377a0b4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:00 [async_llm.py:261] Added request cmpl-dce95913a3084cc39be1c61c3377a0b4-0.
INFO 03-02 00:29:01 [logger.py:42] Received request cmpl-48cc2a9b81c0435cbe0bda3f7b762656-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:01 [async_llm.py:261] Added request cmpl-48cc2a9b81c0435cbe0bda3f7b762656-0.
INFO 03-02 00:29:02 [logger.py:42] Received request cmpl-1662bc8bc3fe4ab38ce074d1ebeef951-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:02 [async_llm.py:261] Added request cmpl-1662bc8bc3fe4ab38ce074d1ebeef951-0.
INFO 03-02 00:29:03 [logger.py:42] Received request cmpl-fa4992cb3fd7411d83388dfbadf9ce82-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:03 [async_llm.py:261] Added request cmpl-fa4992cb3fd7411d83388dfbadf9ce82-0.
INFO 03-02 00:29:04 [logger.py:42] Received request cmpl-7e4c5781c4ef448b99dd292a651fd35b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:04 [async_llm.py:261] Added request cmpl-7e4c5781c4ef448b99dd292a651fd35b-0.
INFO 03-02 00:29:06 [logger.py:42] Received request cmpl-41ff85ec42b24fa8b8358d7e097a705e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:06 [async_llm.py:261] Added request cmpl-41ff85ec42b24fa8b8358d7e097a705e-0.
INFO 03-02 00:29:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.9 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:29:07 [logger.py:42] Received request cmpl-21ab8f5ac40642e3b1f635a88d148243-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:07 [async_llm.py:261] Added request cmpl-21ab8f5ac40642e3b1f635a88d148243-0.
INFO 03-02 00:29:08 [logger.py:42] Received request cmpl-4cfd568e1dc14df8946dcfab3f237c2f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:08 [async_llm.py:261] Added request cmpl-4cfd568e1dc14df8946dcfab3f237c2f-0.
INFO 03-02 00:29:09 [logger.py:42] Received request cmpl-5701a84858974ae4ac35170bc3cd0c42-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:09 [async_llm.py:261] Added request cmpl-5701a84858974ae4ac35170bc3cd0c42-0.
INFO 03-02 00:29:10 [logger.py:42] Received request cmpl-fdfa2eca128a4d39bfb17b257228b673-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:10 [async_llm.py:261] Added request cmpl-fdfa2eca128a4d39bfb17b257228b673-0.
INFO 03-02 00:29:11 [logger.py:42] Received request cmpl-71b9c3d4300445068ce99ee0c71300ad-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:11 [async_llm.py:261] Added request cmpl-71b9c3d4300445068ce99ee0c71300ad-0.
INFO 03-02 00:29:12 [logger.py:42] Received request cmpl-36e6ff56424d4520956921a324620658-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:12 [async_llm.py:261] Added request cmpl-36e6ff56424d4520956921a324620658-0.
INFO 03-02 00:29:13 [logger.py:42] Received request cmpl-100cfa5923c24e6496a3dae294a00dd0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:13 [async_llm.py:261] Added request cmpl-100cfa5923c24e6496a3dae294a00dd0-0.
INFO 03-02 00:29:14 [logger.py:42] Received request cmpl-a569ed5a3e3b49cd9dfea7debdef91db-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:14 [async_llm.py:261] Added request cmpl-a569ed5a3e3b49cd9dfea7debdef91db-0.
INFO 03-02 00:29:15 [logger.py:42] Received request cmpl-1b5ce15cd3594141bc312279f6acf80b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:15 [async_llm.py:261] Added request cmpl-1b5ce15cd3594141bc312279f6acf80b-0.
INFO 03-02 00:29:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:29:16 [logger.py:42] Received request cmpl-9f4c2f9ec57340f2b8764078f64f7190-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:16 [async_llm.py:261] Added request cmpl-9f4c2f9ec57340f2b8764078f64f7190-0.
INFO 03-02 00:29:17 [logger.py:42] Received request cmpl-c6ba4b355dad420ca174a195c3fe00a8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:17 [async_llm.py:261] Added request cmpl-c6ba4b355dad420ca174a195c3fe00a8-0.
INFO 03-02 00:29:19 [logger.py:42] Received request cmpl-1e5514ae68084635bfd6a5b6bb7da794-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:19 [async_llm.py:261] Added request cmpl-1e5514ae68084635bfd6a5b6bb7da794-0.
INFO 03-02 00:29:20 [logger.py:42] Received request cmpl-3d4093a5de4c48a4868e01bda3012298-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:20 [async_llm.py:261] Added request cmpl-3d4093a5de4c48a4868e01bda3012298-0.
INFO 03-02 00:29:21 [logger.py:42] Received request cmpl-367a4814ccab4305a9640f6e0ceac824-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:21 [async_llm.py:261] Added request cmpl-367a4814ccab4305a9640f6e0ceac824-0.
INFO 03-02 00:29:22 [logger.py:42] Received request cmpl-7a49ec3bfc334f628076788de0f690ed-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:22 [async_llm.py:261] Added request cmpl-7a49ec3bfc334f628076788de0f690ed-0.
INFO 03-02 00:29:23 [logger.py:42] Received request cmpl-a013ca37b0a34543825a5b34e3540580-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:23 [async_llm.py:261] Added request cmpl-a013ca37b0a34543825a5b34e3540580-0.
INFO 03-02 00:29:24 [logger.py:42] Received request cmpl-1a455989564c4b569808de3d71ecf007-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:24 [async_llm.py:261] Added request cmpl-1a455989564c4b569808de3d71ecf007-0.
INFO 03-02 00:29:25 [logger.py:42] Received request cmpl-6311e64ea617455db9a31029228c1ec8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:25 [async_llm.py:261] Added request cmpl-6311e64ea617455db9a31029228c1ec8-0.
INFO 03-02 00:29:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:29:26 [logger.py:42] Received request cmpl-12e9d9afdbdb4f4781af3e40c241bacf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:26 [async_llm.py:261] Added request cmpl-12e9d9afdbdb4f4781af3e40c241bacf-0.
INFO 03-02 00:29:27 [logger.py:42] Received request cmpl-be0bed25032b44fc99ed0cce097e92ba-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:27 [async_llm.py:261] Added request cmpl-be0bed25032b44fc99ed0cce097e92ba-0.
INFO 03-02 00:29:28 [logger.py:42] Received request cmpl-7c62fab8e5b949ffaf9c067f57efccc3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:28 [async_llm.py:261] Added request cmpl-7c62fab8e5b949ffaf9c067f57efccc3-0.
INFO 03-02 00:29:29 [logger.py:42] Received request cmpl-fd598386108e4bd4bec7fc4fa2e24c4d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:29 [async_llm.py:261] Added request cmpl-fd598386108e4bd4bec7fc4fa2e24c4d-0.
INFO 03-02 00:29:30 [logger.py:42] Received request cmpl-058a020200214913bf581097d38656bc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:30 [async_llm.py:261] Added request cmpl-058a020200214913bf581097d38656bc-0.
INFO 03-02 00:29:32 [logger.py:42] Received request cmpl-8de3653c27d649f29911f95cdd47b99b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:32 [async_llm.py:261] Added request cmpl-8de3653c27d649f29911f95cdd47b99b-0.
INFO 03-02 00:29:33 [logger.py:42] Received request cmpl-0f1860dc96d54ce4b3ebd9fdbbb7a751-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:33 [async_llm.py:261] Added request cmpl-0f1860dc96d54ce4b3ebd9fdbbb7a751-0.
INFO 03-02 00:29:34 [logger.py:42] Received request cmpl-f51ed0b57b8246f7a3c93f12a1849d4c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:34 [async_llm.py:261] Added request cmpl-f51ed0b57b8246f7a3c93f12a1849d4c-0.
INFO 03-02 00:29:35 [logger.py:42] Received request cmpl-3e028837b3294d6e98455b130046477e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:35 [async_llm.py:261] Added request cmpl-3e028837b3294d6e98455b130046477e-0.
INFO 03-02 00:29:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:29:36 [logger.py:42] Received request cmpl-cc7e790083d2414ab30d9bd87d8b8ca1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:36 [async_llm.py:261] Added request cmpl-cc7e790083d2414ab30d9bd87d8b8ca1-0.
INFO 03-02 00:29:37 [logger.py:42] Received request cmpl-eabd5a13239d49e1bf571cbdb4cdfcd0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:37 [async_llm.py:261] Added request cmpl-eabd5a13239d49e1bf571cbdb4cdfcd0-0.
INFO 03-02 00:29:38 [logger.py:42] Received request cmpl-82caf4dac94744be8645a572ff0062e5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:38 [async_llm.py:261] Added request cmpl-82caf4dac94744be8645a572ff0062e5-0.
INFO 03-02 00:29:39 [logger.py:42] Received request cmpl-6c0ee55fd0124b47b6562038ed74a21b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:39 [async_llm.py:261] Added request cmpl-6c0ee55fd0124b47b6562038ed74a21b-0.
INFO 03-02 00:29:40 [logger.py:42] Received request cmpl-58c5cdb4f2b842b7bc3cfc82471d0a12-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:40 [async_llm.py:261] Added request cmpl-58c5cdb4f2b842b7bc3cfc82471d0a12-0.
INFO 03-02 00:29:41 [logger.py:42] Received request cmpl-83c0a42ae76e4164a7e33cae17207bc9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:41 [async_llm.py:261] Added request cmpl-83c0a42ae76e4164a7e33cae17207bc9-0.
INFO 03-02 00:29:42 [logger.py:42] Received request cmpl-02cb6352c99f447799a06c0887b5d067-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:42 [async_llm.py:261] Added request cmpl-02cb6352c99f447799a06c0887b5d067-0.
INFO 03-02 00:29:43 [logger.py:42] Received request cmpl-d2bb1abf719a4509bc3fd6ba4148deca-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:43 [async_llm.py:261] Added request cmpl-d2bb1abf719a4509bc3fd6ba4148deca-0.
INFO 03-02 00:29:45 [logger.py:42] Received request cmpl-ab9fd9688f784ef3954fbf86fc81360e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:45 [async_llm.py:261] Added request cmpl-ab9fd9688f784ef3954fbf86fc81360e-0.
INFO 03-02 00:29:46 [logger.py:42] Received request cmpl-309b12277d7948acba5db9277b792551-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:46 [async_llm.py:261] Added request cmpl-309b12277d7948acba5db9277b792551-0.
INFO 03-02 00:29:46 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:29:47 [logger.py:42] Received request cmpl-b8e67fb94d9b4af58f4939ef5de4f83b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:47 [async_llm.py:261] Added request cmpl-b8e67fb94d9b4af58f4939ef5de4f83b-0.
INFO 03-02 00:29:48 [logger.py:42] Received request cmpl-84398c7aa3f2464f8d459072a85a414c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:48 [async_llm.py:261] Added request cmpl-84398c7aa3f2464f8d459072a85a414c-0.
INFO 03-02 00:29:49 [logger.py:42] Received request cmpl-5fce22b907264089a04feede47f796d7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:49 [async_llm.py:261] Added request cmpl-5fce22b907264089a04feede47f796d7-0.
INFO 03-02 00:29:50 [logger.py:42] Received request cmpl-a4434bd5e567423081659c2197d0076d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:50 [async_llm.py:261] Added request cmpl-a4434bd5e567423081659c2197d0076d-0.
INFO 03-02 00:29:51 [logger.py:42] Received request cmpl-50a6a400af8f465aae2275866dc57353-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:51 [async_llm.py:261] Added request cmpl-50a6a400af8f465aae2275866dc57353-0.
INFO 03-02 00:29:52 [logger.py:42] Received request cmpl-966aa0c6026d44d4a09bcb1e09989cd8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:52 [async_llm.py:261] Added request cmpl-966aa0c6026d44d4a09bcb1e09989cd8-0.
INFO 03-02 00:29:53 [logger.py:42] Received request cmpl-c530a1c54d2b4517afcc2cafe0ec674a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:53 [async_llm.py:261] Added request cmpl-c530a1c54d2b4517afcc2cafe0ec674a-0.
INFO 03-02 00:29:54 [logger.py:42] Received request cmpl-2b190015e95b4d0189dc769c25e86069-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:54 [async_llm.py:261] Added request cmpl-2b190015e95b4d0189dc769c25e86069-0.
Subsequent /v1/completions requests, each carrying the same SamplingParams as the 00:29:54 request above (prompt 'write a quick sort algorithm.', temperature=0.0, top_p=1.0, max_tokens=5, n=1), listed as time, request ID, outcome:

00:29:55  cmpl-7166e0ba6a36425f9afabaeadad2d0a0-0  added; 200 OK
INFO 03-02 00:29:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
00:29:56  cmpl-325f3ea5d83c480eb68c24a9cde3eaa9-0  added; 200 OK
00:29:58  cmpl-75d375e272a6488ab69c3b9998280caa-0  added; 200 OK
00:29:59  cmpl-b7c0fdc49b794e0eb8b8ff0224f71b7c-0  added; 200 OK
00:30:00  cmpl-dfd92432438f4f73976cd0b6768f8389-0  added; 200 OK
00:30:01  cmpl-c7a6ee54cfdc46f8955b8083e99631af-0  added; 200 OK
00:30:02  cmpl-c6ddd276f6974fcf947a80fa5c4fa4cf-0  added; 200 OK
00:30:03  cmpl-fb9ed9cc91cb40b9b01f8e2ffa51fca6-0  added; 200 OK
00:30:04  cmpl-8187359fa46c4100a9eec2402c4eec04-0  added; 200 OK
00:30:05  cmpl-ad81c29f94484e3486eea63a2f7ab84b-0  added; 200 OK
INFO 03-02 00:30:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
00:30:06  cmpl-d1902d36d656484db871b2d815c78eef-0  added; 200 OK
00:30:07  cmpl-4897e44c64cd4cf793de65eec78c29de-0  added; 200 OK
00:30:08  cmpl-efaedc1b5de84624aa3d821b3dd38c04-0  added; 200 OK
00:30:09  cmpl-efc3a9dc698842369def6ea8523cb2eb-0  added; 200 OK
00:30:11  cmpl-bd1b1f871e324836a56f09298c00f79d-0  added; 200 OK
00:30:12  cmpl-85060029d4e447c9bcff99b311846fc8-0  added; 200 OK
00:30:13  cmpl-b86922ae69704c6d8de963199c9d8441-0  added; 200 OK
00:30:14  cmpl-87827cb43ff14330959b593741011961-0  added; 200 OK
00:30:15  cmpl-699655f70c5348fdabd45a2c47f7e747-0  added; 200 OK
INFO 03-02 00:30:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
00:30:16  cmpl-fc0acddb0f114335b868257cad416241-0  added; 200 OK
00:30:17  cmpl-b3d6678935b042a9bc489d3f97b5207a-0  added; 200 OK
00:30:18  cmpl-63388cfb1dff4481b88cf9ede042de3b-0  added; 200 OK
00:30:19  cmpl-e4e82bc5e46c4696b1fa8a59fa003acd-0  added; 200 OK
00:30:20  cmpl-616562b7c31b4c7d9e7c792fdf3d4c9c-0  added; 200 OK
00:30:21  cmpl-3cad34005d53416da0f30a2e9d4a6a5a-0  added; 200 OK
00:30:22  cmpl-df16adbbd83b4fafb45527e326dad718-0  added; 200 OK
00:30:24  cmpl-80a23fc5309942a3ae97bdc792129185-0  added; 200 OK
00:30:25  cmpl-4a6f0db8d7c14b2f9b38d9a0be9ee437-0  added; 200 OK
00:30:26  cmpl-707e568624b344688faf60d07f7695e0-0  added; 200 OK
INFO 03-02 00:30:26 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.9 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 1.3%, Prefix cache hit rate: 0.0%
00:30:27  cmpl-a6c73a08248f4a72a9b70c99247dd52d-0  added; 200 OK
00:30:28  cmpl-63912b3442fe4f5181e6160e352a6bb7-0  added; 200 OK
00:30:29  cmpl-3197b8ff35c64a218930813025e0576e-0  added; 200 OK
00:30:30  cmpl-634314070f6642e59888d0a6d853be89-0  added; 200 OK
00:30:31  cmpl-4482a9def4034cd8a773b51d98937cd3-0  added; 200 OK
00:30:32  cmpl-54184b94ac044f3a9e0954ea82de71bc-0  added; 200 OK
00:30:33  cmpl-5f03c41e8a4c4c4daab978d9d2e7391d-0  added; 200 OK
00:30:34  cmpl-280e483a289a4794aee56e94bf0ed7db-0  added; 200 OK
00:30:35  cmpl-49052ec67cd74677a8f3ff6ecd08b7f7-0  added; 200 OK
INFO 03-02 00:30:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.6 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
00:30:37  cmpl-16aeee5d3ba240ec953c58e14ee9312d-0  added; 200 OK
00:30:38  cmpl-54fac016c37741c4ad0c87b99541f1b5-0  200 OK
INFO 03-02 00:30:38 [async_llm.py:261] Added request cmpl-54fac016c37741c4ad0c87b99541f1b5-0.
INFO 03-02 00:30:39 [logger.py:42] Received request cmpl-cb46500ecea84668857cb3f3bc17b8ce-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:30:39 [async_llm.py:261] Added request cmpl-cb46500ecea84668857cb3f3bc17b8ce-0.
INFO 03-02 00:30:40 [logger.py:42] Received request cmpl-9bfa57d68a4b4ab38eb5b7b0fa8054a8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:30:40 [async_llm.py:261] Added request cmpl-9bfa57d68a4b4ab38eb5b7b0fa8054a8-0.
INFO 03-02 00:30:41 [logger.py:42] Received request cmpl-7aff5d1309ba4065b53e48195c4240a6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:30:41 [async_llm.py:261] Added request cmpl-7aff5d1309ba4065b53e48195c4240a6-0.
INFO 03-02 00:30:42 [logger.py:42] Received request cmpl-e40cec6149514f468046142555676a51-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:30:42 [async_llm.py:261] Added request cmpl-e40cec6149514f468046142555676a51-0.
INFO 03-02 00:30:43 [logger.py:42] Received request cmpl-138e88b099b1418a883e0b2a75eab3e7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:30:43 [async_llm.py:261] Added request cmpl-138e88b099b1418a883e0b2a75eab3e7-0.
INFO 03-02 00:30:44 [logger.py:42] Received request cmpl-c993c89fdfa449208d2f779f34a70a6d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:30:44 [async_llm.py:261] Added request cmpl-c993c89fdfa449208d2f779f34a70a6d-0.
INFO 03-02 00:30:45 [logger.py:42] Received request cmpl-7bc2017aa3a84f66995686b335bfcb18-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:30:45 [async_llm.py:261] Added request cmpl-7bc2017aa3a84f66995686b335bfcb18-0.
INFO 03-02 00:30:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:30:46 [logger.py:42] Received request cmpl-111ce7b9401245ddb58c0d33b7c6f6d1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:30:46 [async_llm.py:261] Added request cmpl-111ce7b9401245ddb58c0d33b7c6f6d1-0.
INFO 03-02 00:30:47 [logger.py:42] Received request cmpl-1e48059bafc34155aac8e83111ca4e53-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:30:47 [async_llm.py:261] Added request cmpl-1e48059bafc34155aac8e83111ca4e53-0.
INFO 03-02 00:30:48 [logger.py:42] Received request cmpl-67a5277b5cb64e21802cefcbbf239348-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:30:48 [async_llm.py:261] Added request cmpl-67a5277b5cb64e21802cefcbbf239348-0.
INFO 03-02 00:30:50 [logger.py:42] Received request cmpl-3a98c9dcf1ed478f95ab252510cb22c6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:30:50 [async_llm.py:261] Added request cmpl-3a98c9dcf1ed478f95ab252510cb22c6-0.
INFO 03-02 00:30:51 [logger.py:42] Received request cmpl-0fb133497d744bf18f327ccdc92735ab-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:30:51 [async_llm.py:261] Added request cmpl-0fb133497d744bf18f327ccdc92735ab-0.
INFO 03-02 00:30:52 [logger.py:42] Received request cmpl-9af33d3e559a4d6eb95a2b55a98755b3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:30:52 [async_llm.py:261] Added request cmpl-9af33d3e559a4d6eb95a2b55a98755b3-0.
INFO 03-02 00:30:53 [logger.py:42] Received request cmpl-c8036fe78b1445719ca5cfa8ae4d8159-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:30:53 [async_llm.py:261] Added request cmpl-c8036fe78b1445719ca5cfa8ae4d8159-0.
INFO 03-02 00:30:54 [logger.py:42] Received request cmpl-9abab21c648d4234aa99656326fc6fac-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:30:54 [async_llm.py:261] Added request cmpl-9abab21c648d4234aa99656326fc6fac-0.
INFO 03-02 00:30:55 [logger.py:42] Received request cmpl-65d510b2623e4371b9a6eda612663d32-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:30:55 [async_llm.py:261] Added request cmpl-65d510b2623e4371b9a6eda612663d32-0.
INFO 03-02 00:30:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:30:56 [logger.py:42] Received request cmpl-2fefb8adfeec4424963535cadf9eab1c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:30:56 [async_llm.py:261] Added request cmpl-2fefb8adfeec4424963535cadf9eab1c-0.
INFO 03-02 00:30:57 [logger.py:42] Received request cmpl-7ab2b0db77e9453d91013f959ce43a21-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:30:57 [async_llm.py:261] Added request cmpl-7ab2b0db77e9453d91013f959ce43a21-0.
INFO 03-02 00:30:58 [logger.py:42] Received request cmpl-45b01daf48d34935b9f23d8be77fe6d6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:30:58 [async_llm.py:261] Added request cmpl-45b01daf48d34935b9f23d8be77fe6d6-0.
INFO 03-02 00:30:59 [logger.py:42] Received request cmpl-36abb6a6696849348180a3c8f452facd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:30:59 [async_llm.py:261] Added request cmpl-36abb6a6696849348180a3c8f452facd-0.
INFO 03-02 00:31:00 [logger.py:42] Received request cmpl-93fa645f489c418a90602611ce88bfc9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:31:00 [async_llm.py:261] Added request cmpl-93fa645f489c418a90602611ce88bfc9-0.
INFO 03-02 00:31:01 [logger.py:42] Received request cmpl-e657aa2592684d9ca0434a1f6794e35c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:31:01 [async_llm.py:261] Added request cmpl-e657aa2592684d9ca0434a1f6794e35c-0.
INFO 03-02 00:31:03 [logger.py:42] Received request cmpl-26f21be914fb41faa07d398af1c0dfae-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:31:03 [async_llm.py:261] Added request cmpl-26f21be914fb41faa07d398af1c0dfae-0.
INFO 03-02 00:31:04 [logger.py:42] Received request cmpl-ce826dc2c76c4f429072ef36fe551aed-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:31:04 [async_llm.py:261] Added request cmpl-ce826dc2c76c4f429072ef36fe551aed-0.
INFO 03-02 00:31:05 [logger.py:42] Received request cmpl-546b0d8c51fe451bb2640e1c84f73286-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:31:05 [async_llm.py:261] Added request cmpl-546b0d8c51fe451bb2640e1c84f73286-0.
INFO 03-02 00:31:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:31:06 [logger.py:42] Received request cmpl-96489a3d24ac4a1985d8572782eac26d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:31:06 [async_llm.py:261] Added request cmpl-96489a3d24ac4a1985d8572782eac26d-0.
INFO 03-02 00:31:07 [logger.py:42] Received request cmpl-90ef97726ae54fa58ffde5ed8a21536b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:31:07 [async_llm.py:261] Added request cmpl-90ef97726ae54fa58ffde5ed8a21536b-0.
INFO 03-02 00:31:08 [logger.py:42] Received request cmpl-f4fc60af82cd4ada9078fc9af82f5192-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:31:08 [async_llm.py:261] Added request cmpl-f4fc60af82cd4ada9078fc9af82f5192-0.
INFO 03-02 00:31:09 [logger.py:42] Received request cmpl-deb2074adce14496bb34132fb82ac281-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:31:09 [async_llm.py:261] Added request cmpl-deb2074adce14496bb34132fb82ac281-0.
INFO 03-02 00:31:10 [logger.py:42] Received request cmpl-9ae1c90627874af0be0e1eefad546f7e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:31:10 [async_llm.py:261] Added request cmpl-9ae1c90627874af0be0e1eefad546f7e-0.
INFO 03-02 00:31:11 [logger.py:42] Received request cmpl-70e5611fe076475f9bf120440fe55fda-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:31:11 [async_llm.py:261] Added request cmpl-70e5611fe076475f9bf120440fe55fda-0.
INFO 03-02 00:31:12 [logger.py:42] Received request cmpl-cf3c0c5efbca4eb7877a89c4b49fd2f8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:31:12 [async_llm.py:261] Added request cmpl-cf3c0c5efbca4eb7877a89c4b49fd2f8-0.
INFO 03-02 00:31:13 [logger.py:42] Received request cmpl-1dcd9c7359e84fe98b4649934167bf04-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:31:13 [async_llm.py:261] Added request cmpl-1dcd9c7359e84fe98b4649934167bf04-0.
INFO 03-02 00:31:14 [logger.py:42] Received request cmpl-5cc19741e898455b85df7f1c41582b99-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:31:14 [async_llm.py:261] Added request cmpl-5cc19741e898455b85df7f1c41582b99-0.
INFO 03-02 00:31:16 [logger.py:42] Received request cmpl-af00ef4c237e49fe8c33a4291b3d98d1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:31:16 [async_llm.py:261] Added request cmpl-af00ef4c237e49fe8c33a4291b3d98d1-0.
INFO 03-02 00:31:16 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:31:17 [logger.py:42] Received request cmpl-d8a0a885ed7d40368aaf97dc023e1c23-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:31:17 [async_llm.py:261] Added request cmpl-d8a0a885ed7d40368aaf97dc023e1c23-0.
INFO 03-02 00:31:18 [logger.py:42] Received request cmpl-0be1220a3a724baaa743e805bb62f405-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:31:18 [async_llm.py:261] Added request cmpl-0be1220a3a724baaa743e805bb62f405-0.
INFO 03-02 00:31:19 [logger.py:42] Received request cmpl-37eb0b6984eb450fa34c5ae8aa6c412e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:31:19 [async_llm.py:261] Added request cmpl-37eb0b6984eb450fa34c5ae8aa6c412e-0.
INFO 03-02 00:31:20 [logger.py:42] Received request cmpl-c35e3da17e854d5b82efa6d214070cfa-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:31:20 [async_llm.py:261] Added request cmpl-c35e3da17e854d5b82efa6d214070cfa-0.
INFO 03-02 00:31:21 [logger.py:42] Received request cmpl-5d3f45c226ae4d94bbeb95f957ffbac5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:31:21 [async_llm.py:261] Added request cmpl-5d3f45c226ae4d94bbeb95f957ffbac5-0.
INFO 03-02 00:31:22 [logger.py:42] Received request cmpl-12a16f81df0746138a34f4dc1d32fe4d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:31:22 [async_llm.py:261] Added request cmpl-12a16f81df0746138a34f4dc1d32fe4d-0.
INFO 03-02 00:31:23 [logger.py:42] Received request cmpl-e68d9d6843cf4d8f8c1d304d88a1434a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:31:23 [async_llm.py:261] Added request cmpl-e68d9d6843cf4d8f8c1d304d88a1434a-0.
INFO 03-02 00:31:24 [logger.py:42] Received request cmpl-e64fa21621184d64b58427e2c7a0fa01-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:31:24 [async_llm.py:261] Added request cmpl-e64fa21621184d64b58427e2c7a0fa01-0.
INFO 03-02 00:31:25 [logger.py:42] Received request cmpl-187dd98f24664ed9bcb47b9e83886921-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:31:25 [async_llm.py:261] Added request cmpl-187dd98f24664ed9bcb47b9e83886921-0.
INFO 03-02 00:31:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:31:26 [logger.py:42] Received request cmpl-c3482146b76e402f9829eeb3c78c0e1d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:31:26 [async_llm.py:261] Added request cmpl-c3482146b76e402f9829eeb3c78c0e1d-0.
INFO 03-02 00:31:27 [logger.py:42] Received request cmpl-ffe866c2790246cda55aca969cb4192d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:31:27 [async_llm.py:261] Added request cmpl-ffe866c2790246cda55aca969cb4192d-0.
INFO 03-02 00:31:29 [logger.py:42] Received request cmpl-7606c2d256d14baba6fee9640aca61de-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:31:29 [async_llm.py:261] Added request cmpl-7606c2d256d14baba6fee9640aca61de-0.
INFO 03-02 00:31:30 [logger.py:42] Received request cmpl-ff8ae21efb654310a267dcfa94e64493-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:31:30 [async_llm.py:261] Added request cmpl-ff8ae21efb654310a267dcfa94e64493-0.
INFO 03-02 00:31:31 [logger.py:42] Received request cmpl-abb26e24b7d646c9b7943e3f41f36b31-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:31:31 [async_llm.py:261] Added request cmpl-abb26e24b7d646c9b7943e3f41f36b31-0.
INFO 03-02 00:31:32 [logger.py:42] Received request cmpl-bb1dc1f770f247f38e6ad42ae1f92226-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:31:32 [async_llm.py:261] Added request cmpl-bb1dc1f770f247f38e6ad42ae1f92226-0.
INFO 03-02 00:31:33 [logger.py:42] Received request cmpl-7adbe668d62c41d9bb346936c6ef329d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:31:33 [async_llm.py:261] Added request cmpl-7adbe668d62c41d9bb346936c6ef329d-0.
INFO 03-02 00:31:34 [logger.py:42] Received request cmpl-fd0b47134d944a2c8fcc3dd194a0280c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:31:34 [async_llm.py:261] Added request cmpl-fd0b47134d944a2c8fcc3dd194a0280c-0.
INFO 03-02 00:31:35 [logger.py:42] Received request cmpl-9b8e52c55db34ef880bab73b52123f6f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:31:35 [async_llm.py:261] Added request cmpl-9b8e52c55db34ef880bab73b52123f6f-0.
INFO 03-02 00:31:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:31:36 [logger.py:42] Received request cmpl-59f2f37f062045b781f4579af0f23016-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:31:36 [async_llm.py:261] Added request cmpl-59f2f37f062045b781f4579af0f23016-0.
INFO 03-02 00:31:37 [logger.py:42] Received request cmpl-eaca88812e4641878e6cda7ca29291a5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:31:37 [async_llm.py:261] Added request cmpl-eaca88812e4641878e6cda7ca29291a5-0.
INFO 03-02 00:31:38 [logger.py:42] Received request cmpl-7bd4857304c949ed99bb11b00af16f38-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:31:38 [async_llm.py:261] Added request cmpl-7bd4857304c949ed99bb11b00af16f38-0.
INFO 03-02 00:31:39 [logger.py:42] Received request cmpl-13c6277884bd432c8a1762ea6db6c001-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:31:39 [async_llm.py:261] Added request cmpl-13c6277884bd432c8a1762ea6db6c001-0.
INFO 03-02 00:31:41 [logger.py:42] Received request cmpl-ed165b7478914bec9ca489f54513bdd5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:31:41 [async_llm.py:261] Added request cmpl-ed165b7478914bec9ca489f54513bdd5-0.
INFO 03-02 00:31:42 [logger.py:42] Received request cmpl-c309bd10a123407589223c14ef7994f9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:31:42 [async_llm.py:261] Added request cmpl-c309bd10a123407589223c14ef7994f9-0.
INFO 03-02 00:31:43 [logger.py:42] Received request cmpl-5a00e4494f95471cad677d4c94c80766-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:31:43 [async_llm.py:261] Added request cmpl-5a00e4494f95471cad677d4c94c80766-0.
INFO 03-02 00:31:44 [logger.py:42] Received request cmpl-9ff2f2012f6147ea95d8221a7ad9bf6c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:31:44 [async_llm.py:261] Added request cmpl-9ff2f2012f6147ea95d8221a7ad9bf6c-0.
INFO 03-02 00:31:45 [logger.py:42] Received request cmpl-ef26901ee96a4032a242aac3a431dba6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:31:45 [async_llm.py:261] Added request cmpl-ef26901ee96a4032a242aac3a431dba6-0.
INFO 03-02 00:31:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:31:46 [logger.py:42] Received request cmpl-b5d838363d754411b9c95c59f26212b0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:31:46 [async_llm.py:261] Added request cmpl-b5d838363d754411b9c95c59f26212b0-0.
INFO 03-02 00:31:47 [logger.py:42] Received request cmpl-b82c8d5e4091433db3bc1a3c20bc216d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:31:47 [async_llm.py:261] Added request cmpl-b82c8d5e4091433db3bc1a3c20bc216d-0.
INFO 03-02 00:31:48 [logger.py:42] Received request cmpl-ac83898a45ed4aff8b4526b6cf75c971-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:31:48 [async_llm.py:261] Added request cmpl-ac83898a45ed4aff8b4526b6cf75c971-0.
INFO 03-02 00:31:49 [logger.py:42] Received request cmpl-f2f20a05e1cf4233aabb0b2dcea8beff-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:31:49 [async_llm.py:261] Added request cmpl-f2f20a05e1cf4233aabb0b2dcea8beff-0.
INFO 03-02 00:31:50 [logger.py:42] Received request cmpl-31b4eac1c2b74583a86a9e741e70c6eb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:31:50 [async_llm.py:261] Added request cmpl-31b4eac1c2b74583a86a9e741e70c6eb-0.
INFO 03-02 00:31:51 [logger.py:42] Received request cmpl-ea15c3c5135344ff88dd13ec9375442e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:31:51 [async_llm.py:261] Added request cmpl-ea15c3c5135344ff88dd13ec9375442e-0.
INFO 03-02 00:31:52 [logger.py:42] Received request cmpl-8db236ec6dc744d185c82906d5710d38-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:31:52 [async_llm.py:261] Added request cmpl-8db236ec6dc744d185c82906d5710d38-0.
INFO 03-02 00:31:54 [logger.py:42] Received request cmpl-c923ab0fbbf44d4ca838f170cc1800d3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:31:54 [async_llm.py:261] Added request cmpl-c923ab0fbbf44d4ca838f170cc1800d3-0.
INFO 03-02 00:31:55 [logger.py:42] Received request cmpl-aea080ed4c7e4d23ad6e8beb72ac14cc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:31:55 [async_llm.py:261] Added request cmpl-aea080ed4c7e4d23ad6e8beb72ac14cc-0.
INFO 03-02 00:31:56 [logger.py:42] Received request cmpl-5ee14ff61fdb4b3789c2e8a9a2cd4d13-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:31:56 [async_llm.py:261] Added request cmpl-5ee14ff61fdb4b3789c2e8a9a2cd4d13-0.
INFO 03-02 00:31:56 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
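The periodic engine summaries are consistent with the per-request numbers visible in the surrounding log lines: each request carries 7 prompt tokens (the length of `prompt_token_ids`) and generates at most `max_tokens=5` completion tokens, and the summary is emitted roughly every 10 seconds. A small sketch (the 10-second window length is an assumption inferred from the summary timestamps) reproduces the logged averages:

```python
# Each logged request carries 7 prompt tokens (len of prompt_token_ids)
# and generates at most max_tokens=5 completion tokens.
PROMPT_TOKENS = 7
GEN_TOKENS = 5
WINDOW_S = 10  # assumed reporting interval, inferred from summary timestamps


def window_throughput(requests_in_window: int) -> tuple[float, float]:
    """Average (prompt, generation) tokens/s over one reporting window."""
    return (
        requests_in_window * PROMPT_TOKENS / WINDOW_S,
        requests_in_window * GEN_TOKENS / WINDOW_S,
    )
```

Nine requests in a window yields (6.3, 4.5) tokens/s, matching the steady-state summaries below; ten requests yields (7.0, 5.0), matching the first one.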
INFO 03-02 00:31:57 [logger.py:42] Received request cmpl-28c46230660b48619a6b780aa1564aa1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:31:57 [async_llm.py:261] Added request cmpl-28c46230660b48619a6b780aa1564aa1-0.
INFO 03-02 00:31:58 [logger.py:42] Received request cmpl-33e98779df68412d95c1b3ed810faddd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:31:58 [async_llm.py:261] Added request cmpl-33e98779df68412d95c1b3ed810faddd-0.
INFO 03-02 00:31:59 [logger.py:42] Received request cmpl-d3256871670c41339c7a870b1aae02f3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:31:59 [async_llm.py:261] Added request cmpl-d3256871670c41339c7a870b1aae02f3-0.
INFO 03-02 00:32:00 [logger.py:42] Received request cmpl-91910fe6b97447aca3410494ece49e9c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:00 [async_llm.py:261] Added request cmpl-91910fe6b97447aca3410494ece49e9c-0.
INFO 03-02 00:32:01 [logger.py:42] Received request cmpl-5414cdb726b9426ea65baf239acf8e15-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:01 [async_llm.py:261] Added request cmpl-5414cdb726b9426ea65baf239acf8e15-0.
INFO 03-02 00:32:02 [logger.py:42] Received request cmpl-6127f1f56ab4418790fb2a9dc9f7267f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:02 [async_llm.py:261] Added request cmpl-6127f1f56ab4418790fb2a9dc9f7267f-0.
INFO 03-02 00:32:03 [logger.py:42] Received request cmpl-206d66215dda40df89f6d1354533f4a6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:03 [async_llm.py:261] Added request cmpl-206d66215dda40df89f6d1354533f4a6-0.
INFO 03-02 00:32:04 [logger.py:42] Received request cmpl-99bd6b22bb614da497cc3ccdd75270db-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:04 [async_llm.py:261] Added request cmpl-99bd6b22bb614da497cc3ccdd75270db-0.
INFO 03-02 00:32:05 [logger.py:42] Received request cmpl-8fdbc8a528c8468d93879615a9079d54-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:05 [async_llm.py:261] Added request cmpl-8fdbc8a528c8468d93879615a9079d54-0.
INFO 03-02 00:32:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:32:07 [logger.py:42] Received request cmpl-146eb63cbcd34b52a96b78eeec5a5515-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:07 [async_llm.py:261] Added request cmpl-146eb63cbcd34b52a96b78eeec5a5515-0.
INFO 03-02 00:32:08 [logger.py:42] Received request cmpl-35d1e4ac7f564c5586018634d69e0eb6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:08 [async_llm.py:261] Added request cmpl-35d1e4ac7f564c5586018634d69e0eb6-0.
INFO 03-02 00:32:09 [logger.py:42] Received request cmpl-af385530ad5f43a2a69584ac2a4e49aa-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:09 [async_llm.py:261] Added request cmpl-af385530ad5f43a2a69584ac2a4e49aa-0.
INFO 03-02 00:32:10 [logger.py:42] Received request cmpl-ec4bd65121a341e09ca6175a3528f318-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:10 [async_llm.py:261] Added request cmpl-ec4bd65121a341e09ca6175a3528f318-0.
INFO 03-02 00:32:11 [logger.py:42] Received request cmpl-2cdf84533b1b46dd8e7bd8b044dae10b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:11 [async_llm.py:261] Added request cmpl-2cdf84533b1b46dd8e7bd8b044dae10b-0.
INFO 03-02 00:32:12 [logger.py:42] Received request cmpl-32156a433f7f4e74b6e6b4e6c509615b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:12 [async_llm.py:261] Added request cmpl-32156a433f7f4e74b6e6b4e6c509615b-0.
INFO 03-02 00:32:13 [logger.py:42] Received request cmpl-581a3c9766f248dbbe9417405579e035-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:13 [async_llm.py:261] Added request cmpl-581a3c9766f248dbbe9417405579e035-0.
INFO 03-02 00:32:14 [logger.py:42] Received request cmpl-bd1ac0a09cc94686814cda7c46f9edf4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:14 [async_llm.py:261] Added request cmpl-bd1ac0a09cc94686814cda7c46f9edf4-0.
INFO 03-02 00:32:15 [logger.py:42] Received request cmpl-960fa77eaa0049cea7b1ba7943ba328b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:15 [async_llm.py:261] Added request cmpl-960fa77eaa0049cea7b1ba7943ba328b-0.
INFO 03-02 00:32:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:32:16 [logger.py:42] Received request cmpl-7764123dee6e4309ab66a009c1b744e4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:16 [async_llm.py:261] Added request cmpl-7764123dee6e4309ab66a009c1b744e4-0.
INFO 03-02 00:32:17 [logger.py:42] Received request cmpl-ea9a4f62c33a41f3ae368dc07204ba29-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:17 [async_llm.py:261] Added request cmpl-ea9a4f62c33a41f3ae368dc07204ba29-0.
INFO 03-02 00:32:18 [logger.py:42] Received request cmpl-bceeb0f16d5541b4bb8cb23851cde6a3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:18 [async_llm.py:261] Added request cmpl-bceeb0f16d5541b4bb8cb23851cde6a3-0.
INFO 03-02 00:32:20 [logger.py:42] Received request cmpl-f94c369a3e03426dadb5cd172ec7376e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:20 [async_llm.py:261] Added request cmpl-f94c369a3e03426dadb5cd172ec7376e-0.
INFO 03-02 00:32:21 [logger.py:42] Received request cmpl-1d1e4f08b50649ccb5bbdf99de0f5d87-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:21 [async_llm.py:261] Added request cmpl-1d1e4f08b50649ccb5bbdf99de0f5d87-0.
INFO 03-02 00:32:22 [logger.py:42] Received request cmpl-d5ab71a725ac4f258f98c634cef520d9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:22 [async_llm.py:261] Added request cmpl-d5ab71a725ac4f258f98c634cef520d9-0.
INFO 03-02 00:32:23 [logger.py:42] Received request cmpl-d7afd7a145054b5d9610ca9c3978a64c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:23 [async_llm.py:261] Added request cmpl-d7afd7a145054b5d9610ca9c3978a64c-0.
INFO 03-02 00:32:24 [logger.py:42] Received request cmpl-89d2c5b823db425bbcdff277b7ea6181-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:24 [async_llm.py:261] Added request cmpl-89d2c5b823db425bbcdff277b7ea6181-0.
INFO 03-02 00:32:25 [logger.py:42] Received request cmpl-4c9956f4667e486e91e2043997d60108-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:25 [async_llm.py:261] Added request cmpl-4c9956f4667e486e91e2043997d60108-0.
INFO 03-02 00:32:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:32:26 [logger.py:42] Received request cmpl-c92df786d64b443fb0efe32e61e57b8d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:26 [async_llm.py:261] Added request cmpl-c92df786d64b443fb0efe32e61e57b8d-0.
INFO 03-02 00:32:27 [logger.py:42] Received request cmpl-5aba0b7f8c4f43e98d3b4bd91f98670a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:27 [async_llm.py:261] Added request cmpl-5aba0b7f8c4f43e98d3b4bd91f98670a-0.
INFO 03-02 00:32:28 [logger.py:42] Received request cmpl-e31d9e1ff26b4de1a535d605a6c6592e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:28 [async_llm.py:261] Added request cmpl-e31d9e1ff26b4de1a535d605a6c6592e-0.
INFO 03-02 00:32:29 [logger.py:42] Received request cmpl-81f9a77c2655493fadd4cb0972879593-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:29 [async_llm.py:261] Added request cmpl-81f9a77c2655493fadd4cb0972879593-0.
INFO 03-02 00:32:30 [logger.py:42] Received request cmpl-58e9add5c04e43daa74ff7dcf29b42d4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:30 [async_llm.py:261] Added request cmpl-58e9add5c04e43daa74ff7dcf29b42d4-0.
INFO 03-02 00:32:31 [logger.py:42] Received request cmpl-d602c4abcd94443ebe558dd768baec10-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:31 [async_llm.py:261] Added request cmpl-d602c4abcd94443ebe558dd768baec10-0.
INFO 03-02 00:32:33 [logger.py:42] Received request cmpl-075dd9ccc9c2444fb9dafcfb0ae4e515-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:33 [async_llm.py:261] Added request cmpl-075dd9ccc9c2444fb9dafcfb0ae4e515-0.
INFO 03-02 00:32:34 [logger.py:42] Received request cmpl-fcc45b02a7e347ec9a7555ef08eaefd7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:34 [async_llm.py:261] Added request cmpl-fcc45b02a7e347ec9a7555ef08eaefd7-0.
INFO 03-02 00:32:35 [logger.py:42] Received request cmpl-8d2c180cb9e8417998b9b9df929cc308-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:35 [async_llm.py:261] Added request cmpl-8d2c180cb9e8417998b9b9df929cc308-0.
INFO 03-02 00:32:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:32:36 [logger.py:42] Received request cmpl-b8b6d151f2df46cba71445588f49cfc1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:36 [async_llm.py:261] Added request cmpl-b8b6d151f2df46cba71445588f49cfc1-0.
INFO 03-02 00:32:37 [logger.py:42] Received request cmpl-b2bc16eb43ac492db50202b6c728d966-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:37 [async_llm.py:261] Added request cmpl-b2bc16eb43ac492db50202b6c728d966-0.
INFO 03-02 00:32:38 [logger.py:42] Received request cmpl-c2a28306d4bc4380a3a56356ae815937-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:38 [async_llm.py:261] Added request cmpl-c2a28306d4bc4380a3a56356ae815937-0.
INFO 03-02 00:32:39 [logger.py:42] Received request cmpl-80fca908004249a88cf227d85633600d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:39 [async_llm.py:261] Added request cmpl-80fca908004249a88cf227d85633600d-0.
INFO 03-02 00:32:40 [logger.py:42] Received request cmpl-ef6ba774e767482aa5c915065ff5131d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:40 [async_llm.py:261] Added request cmpl-ef6ba774e767482aa5c915065ff5131d-0.
INFO 03-02 00:32:41 [logger.py:42] Received request cmpl-6cb5db3f625c45b8a4a23268f7fff72e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:41 [async_llm.py:261] Added request cmpl-6cb5db3f625c45b8a4a23268f7fff72e-0.
INFO 03-02 00:32:42 [logger.py:42] Received request cmpl-857d34fdca314352a8ad2e41926cdc3f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:42 [async_llm.py:261] Added request cmpl-857d34fdca314352a8ad2e41926cdc3f-0.
INFO 03-02 00:32:43 [logger.py:42] Received request cmpl-b9ee85e2b9e1435fbc1d60259cd1f34e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:43 [async_llm.py:261] Added request cmpl-b9ee85e2b9e1435fbc1d60259cd1f34e-0.
INFO 03-02 00:32:44 [logger.py:42] Received request cmpl-6cbc58d36de7411ea6021140786f1e6f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:44 [async_llm.py:261] Added request cmpl-6cbc58d36de7411ea6021140786f1e6f-0.
INFO 03-02 00:32:46 [logger.py:42] Received request cmpl-b75fa2c231784bf39cd5f1e72c3bf1d1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:46 [async_llm.py:261] Added request cmpl-b75fa2c231784bf39cd5f1e72c3bf1d1-0.
INFO 03-02 00:32:46 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:32:47 [logger.py:42] Received request cmpl-c48546c85c364028add4a96f99128cfa-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:47 [async_llm.py:261] Added request cmpl-c48546c85c364028add4a96f99128cfa-0.
INFO 03-02 00:32:48 [logger.py:42] Received request cmpl-b53da3e9a5de44639fe7ebe3667eaf36-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:48 [async_llm.py:261] Added request cmpl-b53da3e9a5de44639fe7ebe3667eaf36-0.
INFO 03-02 00:32:49 [logger.py:42] Received request cmpl-e9757cfafb474c7fa5267b67e6e2f1a6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:49 [async_llm.py:261] Added request cmpl-e9757cfafb474c7fa5267b67e6e2f1a6-0.
INFO 03-02 00:32:50 [logger.py:42] Received request cmpl-8f46634c4df84c1f86dfe79e64babc73-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:50 [async_llm.py:261] Added request cmpl-8f46634c4df84c1f86dfe79e64babc73-0.
INFO 03-02 00:32:51 [logger.py:42] Received request cmpl-b5c301cbbe8e44408c8da31d7bc291e6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:51 [async_llm.py:261] Added request cmpl-b5c301cbbe8e44408c8da31d7bc291e6-0.
INFO 03-02 00:32:52 [logger.py:42] Received request cmpl-61ac8736cfc44bb0892b344ac228cf54-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:52 [async_llm.py:261] Added request cmpl-61ac8736cfc44bb0892b344ac228cf54-0.
INFO 03-02 00:32:53 [logger.py:42] Received request cmpl-0e96aca74e724b76830c785cd7332013-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:53 [async_llm.py:261] Added request cmpl-0e96aca74e724b76830c785cd7332013-0.
INFO 03-02 00:32:54 [logger.py:42] Received request cmpl-c5498ce739f1435e9a149556f1093376-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:54 [async_llm.py:261] Added request cmpl-c5498ce739f1435e9a149556f1093376-0.
INFO 03-02 00:32:55 [logger.py:42] Received request cmpl-e977deccd8a84f30af85aa07ace69c03-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:55 [async_llm.py:261] Added request cmpl-e977deccd8a84f30af85aa07ace69c03-0.
INFO 03-02 00:32:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:32:56 [logger.py:42] Received request cmpl-4d755d096fa24445b2e2c6c285724a70-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:56 [async_llm.py:261] Added request cmpl-4d755d096fa24445b2e2c6c285724a70-0.
INFO 03-02 00:32:57 [logger.py:42] Received request cmpl-7c375113fc4e401486dc2561d7f08285-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:57 [async_llm.py:261] Added request cmpl-7c375113fc4e401486dc2561d7f08285-0.
INFO 03-02 00:32:59 [logger.py:42] Received request cmpl-55bd0b982a1d4c42a11c4d8ff7ba2a9f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:59 [async_llm.py:261] Added request cmpl-55bd0b982a1d4c42a11c4d8ff7ba2a9f-0.
INFO 03-02 00:33:00 [logger.py:42] Received request cmpl-1c2caea72faf4525956ce902fea4987b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:00 [async_llm.py:261] Added request cmpl-1c2caea72faf4525956ce902fea4987b-0.
INFO 03-02 00:33:01 [logger.py:42] Received request cmpl-70e6f025c06041ceabbd27d1e4c8562d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:01 [async_llm.py:261] Added request cmpl-70e6f025c06041ceabbd27d1e4c8562d-0.
INFO 03-02 00:33:02 [logger.py:42] Received request cmpl-d153903f30274db49b151c385c6591ef-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:02 [async_llm.py:261] Added request cmpl-d153903f30274db49b151c385c6591ef-0.
INFO 03-02 00:33:03 [logger.py:42] Received request cmpl-3ade0056db0c45b88621ed2a2178fe77-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:03 [async_llm.py:261] Added request cmpl-3ade0056db0c45b88621ed2a2178fe77-0.
INFO 03-02 00:33:04 [logger.py:42] Received request cmpl-bacf49df7e84421b90421bb8e7af7d18-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:04 [async_llm.py:261] Added request cmpl-bacf49df7e84421b90421bb8e7af7d18-0.
INFO 03-02 00:33:05 [logger.py:42] Received request cmpl-e6ff2daf3e41438e96ac988200761a5e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:05 [async_llm.py:261] Added request cmpl-e6ff2daf3e41438e96ac988200761a5e-0.
INFO 03-02 00:33:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:33:06 [logger.py:42] Received request cmpl-c187d8cc2880423da5aafeaa725b19eb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:06 [async_llm.py:261] Added request cmpl-c187d8cc2880423da5aafeaa725b19eb-0.
INFO 03-02 00:33:07 [logger.py:42] Received request cmpl-b7213c8985c34522bd128244613c9672-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:07 [async_llm.py:261] Added request cmpl-b7213c8985c34522bd128244613c9672-0.
INFO 03-02 00:33:08 [logger.py:42] Received request cmpl-dcab30647c6c4854a9f4135b5019d3a4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:08 [async_llm.py:261] Added request cmpl-dcab30647c6c4854a9f4135b5019d3a4-0.
INFO 03-02 00:33:09 [logger.py:42] Received request cmpl-81c60d08980f4ec1bd6924bc35e48eb6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:09 [async_llm.py:261] Added request cmpl-81c60d08980f4ec1bd6924bc35e48eb6-0.
INFO 03-02 00:33:11 [logger.py:42] Received request cmpl-1c5ecc9707a049cda575836c2840d616-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:11 [async_llm.py:261] Added request cmpl-1c5ecc9707a049cda575836c2840d616-0.
INFO 03-02 00:33:12 [logger.py:42] Received request cmpl-d66ec97ad66f4e7d815c167e6dc9b7d2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:12 [async_llm.py:261] Added request cmpl-d66ec97ad66f4e7d815c167e6dc9b7d2-0.
INFO 03-02 00:33:13 [logger.py:42] Received request cmpl-4271d37a554445689ed0177ad107ec75-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:13 [async_llm.py:261] Added request cmpl-4271d37a554445689ed0177ad107ec75-0.
INFO 03-02 00:33:14 [logger.py:42] Received request cmpl-c8829cd93c55451bb707583f661980ce-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:14 [async_llm.py:261] Added request cmpl-c8829cd93c55451bb707583f661980ce-0.
INFO 03-02 00:33:15 [logger.py:42] Received request cmpl-c34f6b275e1d4d2589af2bfe4542fde7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:15 [async_llm.py:261] Added request cmpl-c34f6b275e1d4d2589af2bfe4542fde7-0.
INFO 03-02 00:33:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
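The periodic `loggers.py:116` lines summarize engine state: rolling prompt/generation throughput, queue depths, and cache utilization. A sketch that extracts the numeric fields from one such line (the output field names `prompt_tps`, `gen_tps`, etc. are my own labels, not vLLM's):

```python
import re

# Matches the "Engine 000: ..." summary format shown in this log.
STATS = re.compile(
    r"Avg prompt throughput: (?P<prompt_tps>[\d.]+) tokens/s, "
    r"Avg generation throughput: (?P<gen_tps>[\d.]+) tokens/s, "
    r"Running: (?P<running>\d+) reqs, Waiting: (?P<waiting>\d+) reqs, "
    r"GPU KV cache usage: (?P<kv_pct>[\d.]+)%, "
    r"Prefix cache hit rate: (?P<prefix_hit_pct>[\d.]+)%"
)

def parse_stats(line):
    """Return the summary metrics as floats, or None if the line doesn't match."""
    m = STATS.search(line)
    if m is None:
        return None
    return {k: float(v) for k, v in m.groupdict().items()}
```

Note that `Running: 0` alongside nonzero throughput is consistent with this workload: each request generates only `max_tokens=5`, so requests complete between snapshots, and the identical repeated prompt keeps KV usage near-constant at 0.7%.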
INFO 03-02 00:33:16 [logger.py:42] Received request cmpl-f3385e13659e4de79b0b9715e50663ee-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:16 [async_llm.py:261] Added request cmpl-f3385e13659e4de79b0b9715e50663ee-0.
INFO 03-02 00:33:17 [logger.py:42] Received request cmpl-f296bfe3a2b942e5880ec950bd89b705-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:17 [async_llm.py:261] Added request cmpl-f296bfe3a2b942e5880ec950bd89b705-0.
INFO 03-02 00:33:18 [logger.py:42] Received request cmpl-d1200198283f48349e8ed4c078f2fb49-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:18 [async_llm.py:261] Added request cmpl-d1200198283f48349e8ed4c078f2fb49-0.
INFO 03-02 00:33:19 [logger.py:42] Received request cmpl-83b2d88ca06e4b87961e1194d02d758e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:19 [async_llm.py:261] Added request cmpl-83b2d88ca06e4b87961e1194d02d758e-0.
INFO 03-02 00:33:20 [logger.py:42] Received request cmpl-84a1f874281c4a75ab83bfbc6084b19e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:20 [async_llm.py:261] Added request cmpl-84a1f874281c4a75ab83bfbc6084b19e-0.
INFO 03-02 00:33:21 [logger.py:42] Received request cmpl-dc83fcf5e5a5454fbb93e90703dde75b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:21 [async_llm.py:261] Added request cmpl-dc83fcf5e5a5454fbb93e90703dde75b-0.
INFO 03-02 00:33:22 [logger.py:42] Received request cmpl-4a9ea1273d814a2e8e18b7564d9ecb48-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:22 [async_llm.py:261] Added request cmpl-4a9ea1273d814a2e8e18b7564d9ecb48-0.
INFO 03-02 00:33:24 [logger.py:42] Received request cmpl-02a85b0bcd514be1a088829ba301ac3d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:24 [async_llm.py:261] Added request cmpl-02a85b0bcd514be1a088829ba301ac3d-0.
INFO 03-02 00:33:25 [logger.py:42] Received request cmpl-b595cb80e23d4ff5943800af2fb18e4e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:25 [async_llm.py:261] Added request cmpl-b595cb80e23d4ff5943800af2fb18e4e-0.
INFO 03-02 00:33:26 [logger.py:42] Received request cmpl-52b1f271dfd64b44b4ba57b7a7f67703-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:26 [async_llm.py:261] Added request cmpl-52b1f271dfd64b44b4ba57b7a7f67703-0.
INFO 03-02 00:33:26 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:33:27 [logger.py:42] Received request cmpl-ef415805ec6c4a3b85072ea11a7dfda1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:27 [async_llm.py:261] Added request cmpl-ef415805ec6c4a3b85072ea11a7dfda1-0.
INFO 03-02 00:33:28 [logger.py:42] Received request cmpl-6b0ff648fd9249558e8e862fe83e85ed-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:28 [async_llm.py:261] Added request cmpl-6b0ff648fd9249558e8e862fe83e85ed-0.
INFO 03-02 00:33:29 [logger.py:42] Received request cmpl-f6c7071fa4344f51922a0082913e0294-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:29 [async_llm.py:261] Added request cmpl-f6c7071fa4344f51922a0082913e0294-0.
INFO 03-02 00:33:30 [logger.py:42] Received request cmpl-55cea49772f3489091f06fe2e66b1ab1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:30 [async_llm.py:261] Added request cmpl-55cea49772f3489091f06fe2e66b1ab1-0.
INFO 03-02 00:33:31 [logger.py:42] Received request cmpl-fc2e1a4644d34395ae4512f8572486f3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:31 [async_llm.py:261] Added request cmpl-fc2e1a4644d34395ae4512f8572486f3-0.
INFO 03-02 00:33:32 [logger.py:42] Received request cmpl-87166a787bb94a4cbeb8a130f3b9ce79-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:32 [async_llm.py:261] Added request cmpl-87166a787bb94a4cbeb8a130f3b9ce79-0.
INFO 03-02 00:33:33 [logger.py:42] Received request cmpl-22f457d38d0447228dccbdd166bdcb40-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:33 [async_llm.py:261] Added request cmpl-22f457d38d0447228dccbdd166bdcb40-0.
INFO 03-02 00:33:34 [logger.py:42] Received request cmpl-ab9ffda7e615457594b1392d035a7854-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:34 [async_llm.py:261] Added request cmpl-ab9ffda7e615457594b1392d035a7854-0.
INFO 03-02 00:33:35 [logger.py:42] Received request cmpl-e7c8a19cfbf64305aba5764d62a6c909-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:35 [async_llm.py:261] Added request cmpl-e7c8a19cfbf64305aba5764d62a6c909-0.
INFO 03-02 00:33:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:33:37 [logger.py:42] Received request cmpl-53bb42b14cc84aa590baa225312b9212-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:37 [async_llm.py:261] Added request cmpl-53bb42b14cc84aa590baa225312b9212-0.
INFO 03-02 00:33:38 [logger.py:42] Received request cmpl-dafb1ff0989b4a13af89f09f75b8ca76-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:38 [async_llm.py:261] Added request cmpl-dafb1ff0989b4a13af89f09f75b8ca76-0.
INFO 03-02 00:33:39 [logger.py:42] Received request cmpl-d94297d293294a59aebd82e4b87222b1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:39 [async_llm.py:261] Added request cmpl-d94297d293294a59aebd82e4b87222b1-0.
INFO 03-02 00:33:40 [logger.py:42] Received request cmpl-c9e26671fabe4bdb92c26f4dc058473c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:40 [async_llm.py:261] Added request cmpl-c9e26671fabe4bdb92c26f4dc058473c-0.
INFO 03-02 00:33:41 [logger.py:42] Received request cmpl-e14eb97438794cf2a183d5405ca95c6e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:41 [async_llm.py:261] Added request cmpl-e14eb97438794cf2a183d5405ca95c6e-0.
INFO 03-02 00:33:42 [logger.py:42] Received request cmpl-5a2b1bc12bdd4a1f84a603acf34529fe-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:42 [async_llm.py:261] Added request cmpl-5a2b1bc12bdd4a1f84a603acf34529fe-0.
INFO 03-02 00:33:43 [logger.py:42] Received request cmpl-4ba9c360277740ceb53436ad8e73d0ef-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:43 [async_llm.py:261] Added request cmpl-4ba9c360277740ceb53436ad8e73d0ef-0.
INFO 03-02 00:33:44 [logger.py:42] Received request cmpl-8a476be6532e403ea1f3bd48607c3ac0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:44 [async_llm.py:261] Added request cmpl-8a476be6532e403ea1f3bd48607c3ac0-0.
INFO 03-02 00:33:45 [logger.py:42] Received request cmpl-6ba7b64f7ae448f3909ba0b74980c914-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:45 [async_llm.py:261] Added request cmpl-6ba7b64f7ae448f3909ba0b74980c914-0.
INFO 03-02 00:33:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:33:46 [logger.py:42] Received request cmpl-52783b89a5444492a0886d11a832970f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:46 [async_llm.py:261] Added request cmpl-52783b89a5444492a0886d11a832970f-0.
INFO 03-02 00:33:47 [logger.py:42] Received request cmpl-df49e935e10b43c0b31e63a07a1dadfc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:47 [async_llm.py:261] Added request cmpl-df49e935e10b43c0b31e63a07a1dadfc-0.
INFO 03-02 00:33:48 [logger.py:42] Received request cmpl-bb13626908464697ab9579a1875916ed-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:48 [async_llm.py:261] Added request cmpl-bb13626908464697ab9579a1875916ed-0.
INFO 03-02 00:33:50 [logger.py:42] Received request cmpl-30b14e3932ff4861ab38f19cbc990b21-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:50 [async_llm.py:261] Added request cmpl-30b14e3932ff4861ab38f19cbc990b21-0.
INFO 03-02 00:33:51 [logger.py:42] Received request cmpl-1c6a251e8c2a4d50b0fc1a7e7104402a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:51 [async_llm.py:261] Added request cmpl-1c6a251e8c2a4d50b0fc1a7e7104402a-0.
INFO 03-02 00:33:52 [logger.py:42] Received request cmpl-e293a221d299494a9a2dc7facea6de3c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:52 [async_llm.py:261] Added request cmpl-e293a221d299494a9a2dc7facea6de3c-0.
INFO 03-02 00:33:53 [logger.py:42] Received request cmpl-c1bcaa71038c47428aa90bb3e3226d22-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:53 [async_llm.py:261] Added request cmpl-c1bcaa71038c47428aa90bb3e3226d22-0.
INFO 03-02 00:33:54 [logger.py:42] Received request cmpl-51deb2b1067447ccaab9b49dd549c362-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:54 [async_llm.py:261] Added request cmpl-51deb2b1067447ccaab9b49dd549c362-0.
INFO 03-02 00:33:55 [logger.py:42] Received request cmpl-772c11f070c649039989e3f3005ef067-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:55 [async_llm.py:261] Added request cmpl-772c11f070c649039989e3f3005ef067-0.
INFO 03-02 00:33:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:33:56 [logger.py:42] Received request cmpl-d40a509ee0484373827f2ccf7e69223e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:56 [async_llm.py:261] Added request cmpl-d40a509ee0484373827f2ccf7e69223e-0.
INFO 03-02 00:33:57 [logger.py:42] Received request cmpl-f4d9f85ea52e4ac0b0e72dc5bd8cb38d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:57 [async_llm.py:261] Added request cmpl-f4d9f85ea52e4ac0b0e72dc5bd8cb38d-0.
INFO 03-02 00:33:58 [logger.py:42] Received request cmpl-eadc8d65f6114a36b289203582ac86b0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:58 [async_llm.py:261] Added request cmpl-eadc8d65f6114a36b289203582ac86b0-0.
INFO 03-02 00:33:59 [logger.py:42] Received request cmpl-3b5f905a9f3c4eaeae0a43148f5e9c87-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:59 [async_llm.py:261] Added request cmpl-3b5f905a9f3c4eaeae0a43148f5e9c87-0.
INFO 03-02 00:34:00 [logger.py:42] Received request cmpl-dca099cbf0dd4b20b81c5246cc406d66-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:00 [async_llm.py:261] Added request cmpl-dca099cbf0dd4b20b81c5246cc406d66-0.
INFO 03-02 00:34:01 [logger.py:42] Received request cmpl-64fd66993e3c4757abb4d5edee58e2d9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:01 [async_llm.py:261] Added request cmpl-64fd66993e3c4757abb4d5edee58e2d9-0.
INFO 03-02 00:34:03 [logger.py:42] Received request cmpl-26e5956c2ed547418715ac49126923b3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:03 [async_llm.py:261] Added request cmpl-26e5956c2ed547418715ac49126923b3-0.
INFO 03-02 00:34:04 [logger.py:42] Received request cmpl-6f9e4574781c4dc99d53b8ad67297ccb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:04 [async_llm.py:261] Added request cmpl-6f9e4574781c4dc99d53b8ad67297ccb-0.
INFO 03-02 00:34:05 [logger.py:42] Received request cmpl-0dfbe382640d46aba7bdec8c65491ca9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:05 [async_llm.py:261] Added request cmpl-0dfbe382640d46aba7bdec8c65491ca9-0.
INFO 03-02 00:34:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:34:06 [logger.py:42] Received request cmpl-02979ec1cf8e4d55946467391d85bab6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:06 [async_llm.py:261] Added request cmpl-02979ec1cf8e4d55946467391d85bab6-0.
INFO 03-02 00:34:07 [logger.py:42] Received request cmpl-950373842d9b4d01a1bcd47b96be88c3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:07 [async_llm.py:261] Added request cmpl-950373842d9b4d01a1bcd47b96be88c3-0.
INFO 03-02 00:34:08 [logger.py:42] Received request cmpl-07fa4bfa5567473f86c34f54374ea25b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:08 [async_llm.py:261] Added request cmpl-07fa4bfa5567473f86c34f54374ea25b-0.
INFO 03-02 00:34:09 [logger.py:42] Received request cmpl-47178712949d4de49f19b3b9f4d5fca9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:09 [async_llm.py:261] Added request cmpl-47178712949d4de49f19b3b9f4d5fca9-0.
INFO 03-02 00:34:10 [logger.py:42] Received request cmpl-a7742f5cdc5648d6bf67ecb6aec769de-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:10 [async_llm.py:261] Added request cmpl-a7742f5cdc5648d6bf67ecb6aec769de-0.
INFO 03-02 00:34:11 [logger.py:42] Received request cmpl-ad392fe5a01640b3aaf49dc90dff227d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:11 [async_llm.py:261] Added request cmpl-ad392fe5a01640b3aaf49dc90dff227d-0.
INFO 03-02 00:34:12 [logger.py:42] Received request cmpl-35eb016afd3c4e55bd90e9a5119e86e6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:12 [async_llm.py:261] Added request cmpl-35eb016afd3c4e55bd90e9a5119e86e6-0.
INFO 03-02 00:34:13 [logger.py:42] Received request cmpl-61b8fa72054b4d32a92ced4cee236fae-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:13 [async_llm.py:261] Added request cmpl-61b8fa72054b4d32a92ced4cee236fae-0.
INFO 03-02 00:34:14 [logger.py:42] Received request cmpl-2a3af16e935544b599d74449a4f1850e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:14 [async_llm.py:261] Added request cmpl-2a3af16e935544b599d74449a4f1850e-0.
INFO 03-02 00:34:16 [logger.py:42] Received request cmpl-281899d622c14e5a81a4738aa450bed9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:16 [async_llm.py:261] Added request cmpl-281899d622c14e5a81a4738aa450bed9-0.
INFO 03-02 00:34:16 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:34:17 [logger.py:42] Received request cmpl-190e9cc496a445318dfabf7536fae7cf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:17 [async_llm.py:261] Added request cmpl-190e9cc496a445318dfabf7536fae7cf-0.
INFO 03-02 00:34:18 [logger.py:42] Received request cmpl-c667bcce3c8d4c33909028b866c98261-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:18 [async_llm.py:261] Added request cmpl-c667bcce3c8d4c33909028b866c98261-0.
INFO 03-02 00:34:19 [logger.py:42] Received request cmpl-0838038d4b724417a4838e5f74d17526-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:19 [async_llm.py:261] Added request cmpl-0838038d4b724417a4838e5f74d17526-0.
INFO 03-02 00:34:20 [logger.py:42] Received request cmpl-dc27a35c598d4876a6194925090664ca-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:20 [async_llm.py:261] Added request cmpl-dc27a35c598d4876a6194925090664ca-0.
INFO 03-02 00:34:21 [logger.py:42] Received request cmpl-8348f2a015524c1bb6fb43e7d87fbce4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:21 [async_llm.py:261] Added request cmpl-8348f2a015524c1bb6fb43e7d87fbce4-0.
INFO 03-02 00:34:22 [logger.py:42] Received request cmpl-c939d7415b0944d4b8b46dcbfe31da2f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:22 [async_llm.py:261] Added request cmpl-c939d7415b0944d4b8b46dcbfe31da2f-0.
INFO 03-02 00:34:23 [logger.py:42] Received request cmpl-12a8cc53dbd942f2a0713be6d55f5188-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:23 [async_llm.py:261] Added request cmpl-12a8cc53dbd942f2a0713be6d55f5188-0.
INFO 03-02 00:34:24 [logger.py:42] Received request cmpl-e93a2dbe0a0a4e7db12fd898796aab94-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:24 [async_llm.py:261] Added request cmpl-e93a2dbe0a0a4e7db12fd898796aab94-0.
INFO 03-02 00:34:25 [logger.py:42] Received request cmpl-e00b2c5b9c984836b2e17437dac81a73-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:25 [async_llm.py:261] Added request cmpl-e00b2c5b9c984836b2e17437dac81a73-0.
INFO 03-02 00:34:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:34:26 [logger.py:42] Received request cmpl-cb7b33df5e0a4ad199e1c5bc5b07d4fa-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:26 [async_llm.py:261] Added request cmpl-cb7b33df5e0a4ad199e1c5bc5b07d4fa-0.
INFO 03-02 00:34:27 [logger.py:42] Received request cmpl-68c629784f384cfaa18d5fe5b4bf585e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:27 [async_llm.py:261] Added request cmpl-68c629784f384cfaa18d5fe5b4bf585e-0.
INFO 03-02 00:34:29 [logger.py:42] Received request cmpl-5f9be04b49ea434fbebb12bacfd96379-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:29 [async_llm.py:261] Added request cmpl-5f9be04b49ea434fbebb12bacfd96379-0.
INFO 03-02 00:34:30 [logger.py:42] Received request cmpl-973dde70b59e47a6a5107c5e58e6c1d7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:30 [async_llm.py:261] Added request cmpl-973dde70b59e47a6a5107c5e58e6c1d7-0.
INFO 03-02 00:34:31 [logger.py:42] Received request cmpl-a44731c2ab7246c9bbf61ad3c3291362-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:31 [async_llm.py:261] Added request cmpl-a44731c2ab7246c9bbf61ad3c3291362-0.
INFO 03-02 00:34:32 [logger.py:42] Received request cmpl-f81801f2497948839511b407843bce6b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:32 [async_llm.py:261] Added request cmpl-f81801f2497948839511b407843bce6b-0.
INFO 03-02 00:34:33 [logger.py:42] Received request cmpl-3db3bbe82c5747479fefe492afca8dc3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:33 [async_llm.py:261] Added request cmpl-3db3bbe82c5747479fefe492afca8dc3-0.
INFO 03-02 00:34:34 [logger.py:42] Received request cmpl-d9745526bcc74d2894db9fce6e72307b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:34 [async_llm.py:261] Added request cmpl-d9745526bcc74d2894db9fce6e72307b-0.
INFO 03-02 00:34:35 [logger.py:42] Received request cmpl-6b8e29d4551241be88e1366b83733f3c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:35 [async_llm.py:261] Added request cmpl-6b8e29d4551241be88e1366b83733f3c-0.
INFO 03-02 00:34:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:34:36 [logger.py:42] Received request cmpl-755b0f17914c4fff8ebbc2d322e4321e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:36 [async_llm.py:261] Added request cmpl-755b0f17914c4fff8ebbc2d322e4321e-0.
INFO 03-02 00:34:37 [logger.py:42] Received request cmpl-c56682e334f9461cb924770c93db1c0a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:37 [async_llm.py:261] Added request cmpl-c56682e334f9461cb924770c93db1c0a-0.
INFO 03-02 00:34:38 [logger.py:42] Received request cmpl-b4ce74a6c29545829d67c2038a9af6b8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:38 [async_llm.py:261] Added request cmpl-b4ce74a6c29545829d67c2038a9af6b8-0.
INFO 03-02 00:34:39 [logger.py:42] Received request cmpl-6a5d97546ac742eca2e0a11e8f832e36-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:39 [async_llm.py:261] Added request cmpl-6a5d97546ac742eca2e0a11e8f832e36-0.
INFO 03-02 00:34:40 [logger.py:42] Received request cmpl-bcbf106f728046599539d1b09f8c2dae-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:40 [async_llm.py:261] Added request cmpl-bcbf106f728046599539d1b09f8c2dae-0.
INFO 03-02 00:34:42 [logger.py:42] Received request cmpl-a1ca687c9d194a339951905f669f6dca-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:42 [async_llm.py:261] Added request cmpl-a1ca687c9d194a339951905f669f6dca-0.
INFO 03-02 00:34:43 [logger.py:42] Received request cmpl-f278c33b6ff5404a922bd859bc7a50b3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:43 [async_llm.py:261] Added request cmpl-f278c33b6ff5404a922bd859bc7a50b3-0.
INFO 03-02 00:34:44 [logger.py:42] Received request cmpl-72f274dddbab4d4a801b21bc56708e30-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:44 [async_llm.py:261] Added request cmpl-72f274dddbab4d4a801b21bc56708e30-0.
INFO 03-02 00:34:45 [logger.py:42] Received request cmpl-935744825ae04aefad0778fa899570a2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:45 [async_llm.py:261] Added request cmpl-935744825ae04aefad0778fa899570a2-0.
INFO 03-02 00:34:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:34:46 [logger.py:42] Received request cmpl-016f7026483e48f88ecb1eb6246b376a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:46 [async_llm.py:261] Added request cmpl-016f7026483e48f88ecb1eb6246b376a-0.
INFO 03-02 00:34:47 [logger.py:42] Received request cmpl-359b5986a8ba418e9daa93c33c1e120c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:47 [async_llm.py:261] Added request cmpl-359b5986a8ba418e9daa93c33c1e120c-0.
INFO 03-02 00:34:48 [logger.py:42] Received request cmpl-d2bfc1febfe84caeb5eee0ed4787eed1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:48 [async_llm.py:261] Added request cmpl-d2bfc1febfe84caeb5eee0ed4787eed1-0.
INFO 03-02 00:34:49 [logger.py:42] Received request cmpl-11cd9b6114b54ecaa62d809deedf0295-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:49 [async_llm.py:261] Added request cmpl-11cd9b6114b54ecaa62d809deedf0295-0.
INFO 03-02 00:34:50 [logger.py:42] Received request cmpl-50e7a7eaab174191bef447df66a3ab4c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:50 [async_llm.py:261] Added request cmpl-50e7a7eaab174191bef447df66a3ab4c-0.
INFO 03-02 00:34:51 [logger.py:42] Received request cmpl-50bbeec1b15245adacd84c5547be492a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:51 [async_llm.py:261] Added request cmpl-50bbeec1b15245adacd84c5547be492a-0.
INFO 03-02 00:34:52 [logger.py:42] Received request cmpl-2d157817ba0546ddaa49989dc6340a12-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:52 [async_llm.py:261] Added request cmpl-2d157817ba0546ddaa49989dc6340a12-0.
INFO 03-02 00:34:53 [logger.py:42] Received request cmpl-b50ab397006a4a3d9b8f06fbdbd0c3fe-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:53 [async_llm.py:261] Added request cmpl-b50ab397006a4a3d9b8f06fbdbd0c3fe-0.
INFO 03-02 00:34:55 [logger.py:42] Received request cmpl-da34c531294c4b4381f3d5dd57fbdb1c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:55 [async_llm.py:261] Added request cmpl-da34c531294c4b4381f3d5dd57fbdb1c-0.
INFO 03-02 00:34:56 [logger.py:42] Received request cmpl-c5db2694d53d499eb242ab8e56539b9d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:56 [async_llm.py:261] Added request cmpl-c5db2694d53d499eb242ab8e56539b9d-0.
INFO 03-02 00:34:56 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:34:57 [logger.py:42] Received request cmpl-0eaa8c59523a4426bb83fe76c2dd86f6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:57 [async_llm.py:261] Added request cmpl-0eaa8c59523a4426bb83fe76c2dd86f6-0.
INFO 03-02 00:34:58 [logger.py:42] Received request cmpl-a1f088f7eab247718cf4009fd76fe8c9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:58 [async_llm.py:261] Added request cmpl-a1f088f7eab247718cf4009fd76fe8c9-0.
INFO 03-02 00:34:59 [logger.py:42] Received request cmpl-8de2176ea3224b45b252c74c05868498-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:59 [async_llm.py:261] Added request cmpl-8de2176ea3224b45b252c74c05868498-0.
INFO 03-02 00:35:00 [logger.py:42] Received request cmpl-d5f7e9783de646f797a13c8c45a8a33e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:00 [async_llm.py:261] Added request cmpl-d5f7e9783de646f797a13c8c45a8a33e-0.
INFO 03-02 00:35:01 [logger.py:42] Received request cmpl-48e7e91f48924cd59a9a2873ca78b927-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:01 [async_llm.py:261] Added request cmpl-48e7e91f48924cd59a9a2873ca78b927-0.
INFO 03-02 00:35:02 [logger.py:42] Received request cmpl-2f6f285826b5468599a1ef2e5d0adeed-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:02 [async_llm.py:261] Added request cmpl-2f6f285826b5468599a1ef2e5d0adeed-0.
INFO 03-02 00:35:03 [logger.py:42] Received request cmpl-4b5dac28ad1d42ceb4f40ff466969f40-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:03 [async_llm.py:261] Added request cmpl-4b5dac28ad1d42ceb4f40ff466969f40-0.
INFO 03-02 00:35:04 [logger.py:42] Received request cmpl-220b264437754cf28f01a4f18688d479-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:04 [async_llm.py:261] Added request cmpl-220b264437754cf28f01a4f18688d479-0.
INFO 03-02 00:35:05 [logger.py:42] Received request cmpl-b95b094156a1405f86762f8569c48bd2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:05 [async_llm.py:261] Added request cmpl-b95b094156a1405f86762f8569c48bd2-0.
INFO 03-02 00:35:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:35:06 [logger.py:42] Received request cmpl-a6e52453e75948dc82cfeb1ef8d49c34-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:06 [async_llm.py:261] Added request cmpl-a6e52453e75948dc82cfeb1ef8d49c34-0.
INFO 03-02 00:35:08 [logger.py:42] Received request cmpl-f6745472edb74230a746cc34ba69927c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:08 [async_llm.py:261] Added request cmpl-f6745472edb74230a746cc34ba69927c-0.
INFO 03-02 00:35:09 [logger.py:42] Received request cmpl-4344be1a1d5c4b36ae164c267c1e32c3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:09 [async_llm.py:261] Added request cmpl-4344be1a1d5c4b36ae164c267c1e32c3-0.
INFO 03-02 00:35:10 [logger.py:42] Received request cmpl-31e485ecb6b343e0b35e2aea62b10833-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:10 [async_llm.py:261] Added request cmpl-31e485ecb6b343e0b35e2aea62b10833-0.
INFO 03-02 00:35:11 [logger.py:42] Received request cmpl-3da97515de4b45cca43b83c4d1d3caef-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:11 [async_llm.py:261] Added request cmpl-3da97515de4b45cca43b83c4d1d3caef-0.
INFO 03-02 00:35:12 [logger.py:42] Received request cmpl-49a31c5bed794a9c99121c8c0af0b122-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:12 [async_llm.py:261] Added request cmpl-49a31c5bed794a9c99121c8c0af0b122-0.
INFO 03-02 00:35:13 [logger.py:42] Received request cmpl-66a89d904ea2466cb2ae78ff9a48b1e1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:13 [async_llm.py:261] Added request cmpl-66a89d904ea2466cb2ae78ff9a48b1e1-0.
INFO 03-02 00:35:14 [logger.py:42] Received request cmpl-9f44ded9dca944c2b026e31269594d5f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:14 [async_llm.py:261] Added request cmpl-9f44ded9dca944c2b026e31269594d5f-0.
INFO 03-02 00:35:15 [logger.py:42] Received request cmpl-3dd7745fbafa48f690a825c7193fa3f4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:15 [async_llm.py:261] Added request cmpl-3dd7745fbafa48f690a825c7193fa3f4-0.
INFO 03-02 00:35:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:35:16 [logger.py:42] Received request cmpl-09761b6e2fa948ac8045f8b96080ee2d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:16 [async_llm.py:261] Added request cmpl-09761b6e2fa948ac8045f8b96080ee2d-0.
INFO 03-02 00:35:17 [logger.py:42] Received request cmpl-025c76cbcaae4517ac46a5f7ad2580d0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:17 [async_llm.py:261] Added request cmpl-025c76cbcaae4517ac46a5f7ad2580d0-0.
INFO 03-02 00:35:18 [logger.py:42] Received request cmpl-9151d1141fbe4bf1b210391e7859d7cb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:18 [async_llm.py:261] Added request cmpl-9151d1141fbe4bf1b210391e7859d7cb-0.
INFO 03-02 00:35:19 [logger.py:42] Received request cmpl-ea25eaff6a9f44d2afa7042fbd4cf75c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:20 [async_llm.py:261] Added request cmpl-ea25eaff6a9f44d2afa7042fbd4cf75c-0.
INFO 03-02 00:35:21 [logger.py:42] Received request cmpl-66e5d00f2ac24a5fb57c74a0728abf25-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:21 [async_llm.py:261] Added request cmpl-66e5d00f2ac24a5fb57c74a0728abf25-0.
INFO 03-02 00:35:22 [logger.py:42] Received request cmpl-4234425a6c9547d6aaea4e99112a45be-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:22 [async_llm.py:261] Added request cmpl-4234425a6c9547d6aaea4e99112a45be-0.
INFO 03-02 00:35:23 [logger.py:42] Received request cmpl-0e580116d49e4573a0ddbe6524bb7791-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:23 [async_llm.py:261] Added request cmpl-0e580116d49e4573a0ddbe6524bb7791-0.
INFO 03-02 00:35:24 [logger.py:42] Received request cmpl-1a905aa4ff6d4766af89efc36c99053e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:24 [async_llm.py:261] Added request cmpl-1a905aa4ff6d4766af89efc36c99053e-0.
INFO 03-02 00:35:25 [logger.py:42] Received request cmpl-6a4199135f594de99e17c74f73458543-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:25 [async_llm.py:261] Added request cmpl-6a4199135f594de99e17c74f73458543-0.
INFO 03-02 00:35:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:35:26 [logger.py:42] Received request cmpl-5402afe6141046978775e80a840f01ac-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:26 [async_llm.py:261] Added request cmpl-5402afe6141046978775e80a840f01ac-0.
INFO 03-02 00:35:27 [logger.py:42] Received request cmpl-870c24143e2a4be4891c87452e028bf4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:27 [async_llm.py:261] Added request cmpl-870c24143e2a4be4891c87452e028bf4-0.
INFO 03-02 00:35:28 [logger.py:42] Received request cmpl-39e5a92102d9416faf5c5c007a92de8a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:28 [async_llm.py:261] Added request cmpl-39e5a92102d9416faf5c5c007a92de8a-0.
INFO 03-02 00:35:29 [logger.py:42] Received request cmpl-eea60497af56413382ab68be8a8b100c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:29 [async_llm.py:261] Added request cmpl-eea60497af56413382ab68be8a8b100c-0.
INFO 03-02 00:35:30 [logger.py:42] Received request cmpl-528b910b2fcc4f9aa0186638b0ddd068-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:30 [async_llm.py:261] Added request cmpl-528b910b2fcc4f9aa0186638b0ddd068-0.
INFO 03-02 00:35:31 [logger.py:42] Received request cmpl-d10ab7abab944834a06da1ef6779e72e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:31 [async_llm.py:261] Added request cmpl-d10ab7abab944834a06da1ef6779e72e-0.
INFO 03-02 00:35:33 [logger.py:42] Received request cmpl-9c6830c068f448a3811765c849ceb04c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:33 [async_llm.py:261] Added request cmpl-9c6830c068f448a3811765c849ceb04c-0.
INFO 03-02 00:35:34 [logger.py:42] Received request cmpl-8ced7a4f275244d5be4c1bc80b06a718-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:34 [async_llm.py:261] Added request cmpl-8ced7a4f275244d5be4c1bc80b06a718-0.
INFO 03-02 00:35:35 [logger.py:42] Received request cmpl-81f1d7ee47cb4ed1825d9504db70364b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:35 [async_llm.py:261] Added request cmpl-81f1d7ee47cb4ed1825d9504db70364b-0.
INFO 03-02 00:35:36 [logger.py:42] Received request cmpl-b20c074daea74ace9de4833e88fc2df2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:36 [async_llm.py:261] Added request cmpl-b20c074daea74ace9de4833e88fc2df2-0.
INFO 03-02 00:35:36 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.7 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 1.3%, Prefix cache hit rate: 0.0%
INFO 03-02 00:35:37 [logger.py:42] Received request cmpl-0913d00c5102443abb88ee7eedfbf534-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:37 [async_llm.py:261] Added request cmpl-0913d00c5102443abb88ee7eedfbf534-0.
INFO 03-02 00:35:38 [logger.py:42] Received request cmpl-a1046fc2921641438b9136c6f230a3b9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:38 [async_llm.py:261] Added request cmpl-a1046fc2921641438b9136c6f230a3b9-0.
INFO 03-02 00:35:39 [logger.py:42] Received request cmpl-43f7128f131c4cd7b25bb818fa219359-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:39 [async_llm.py:261] Added request cmpl-43f7128f131c4cd7b25bb818fa219359-0.
INFO 03-02 00:35:40 [logger.py:42] Received request cmpl-3c55bb14c0274e249b2022f4849e8b1e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:40 [async_llm.py:261] Added request cmpl-3c55bb14c0274e249b2022f4849e8b1e-0.
INFO 03-02 00:35:41 [logger.py:42] Received request cmpl-6d71c46337184d33b946ab6651c22e83-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:41 [async_llm.py:261] Added request cmpl-6d71c46337184d33b946ab6651c22e83-0.
INFO 03-02 00:35:42 [logger.py:42] Received request cmpl-408889707ec4443bab3d3780c0b8a93d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:42 [async_llm.py:261] Added request cmpl-408889707ec4443bab3d3780c0b8a93d-0.
INFO 03-02 00:35:43 [logger.py:42] Received request cmpl-b143da66c92840aaba95e2f24b4c66da-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:43 [async_llm.py:261] Added request cmpl-b143da66c92840aaba95e2f24b4c66da-0.
INFO 03-02 00:35:44 [logger.py:42] Received request cmpl-ac486c4bdca34fb68432f20647d41ae7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:44 [async_llm.py:261] Added request cmpl-ac486c4bdca34fb68432f20647d41ae7-0.
INFO 03-02 00:35:46 [logger.py:42] Received request cmpl-a3d160daa3e743cd876298160b72c4c0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:46 [async_llm.py:261] Added request cmpl-a3d160daa3e743cd876298160b72c4c0-0.
INFO 03-02 00:35:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.8 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
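The periodic `loggers.py:116` lines interleaved with the requests report engine-level stats (throughput, queue depth, KV-cache usage). When scraping these logs for monitoring, a small parser like the following can extract them; this is a sketch written against the exact line format shown in this log, and the field layout may differ in other vLLM versions.

```python
import re

# Regex written against the engine-stats format visible in this log;
# field names here are our own labels, not vLLM identifiers.
STATS_RE = re.compile(
    r"Avg prompt throughput: (?P<prompt>[\d.]+) tokens/s, "
    r"Avg generation throughput: (?P<gen>[\d.]+) tokens/s, "
    r"Running: (?P<running>\d+) reqs, Waiting: (?P<waiting>\d+) reqs, "
    r"GPU KV cache usage: (?P<kv>[\d.]+)%"
)

# Sample line copied verbatim from the log above.
line = (
    "INFO 03-02 00:35:46 [loggers.py:116] Engine 000: "
    "Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: "
    "4.8 tokens/s, Running: 0 reqs, Waiting: 0 reqs, "
    "GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%"
)

m = STATS_RE.search(line)
assert m is not None
stats = {k: float(v) for k, v in m.groupdict().items()}
print(stats)
```

Running 0 / Waiting 0 in most samples is consistent with the traffic pattern above: each request asks for only 5 tokens, so requests complete between the one-per-second arrivals and the queue stays empty.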
INFO 03-02 00:35:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:36:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:36:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:36:26 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:30 [async_llm.py:261] Added request cmpl-6bf8a2ec42024ad1bfff0b13acae14c3-0.
INFO 03-02 00:36:31 [logger.py:42] Received request cmpl-0b4710802d0740c2bb54e83338e6afe7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:31 [async_llm.py:261] Added request cmpl-0b4710802d0740c2bb54e83338e6afe7-0.
INFO 03-02 00:36:32 [logger.py:42] Received request cmpl-909825ba39da4f5db335291936bd7f72-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:32 [async_llm.py:261] Added request cmpl-909825ba39da4f5db335291936bd7f72-0.
INFO 03-02 00:36:33 [logger.py:42] Received request cmpl-116cbe6ad80248bfaa2f870af0f19aed-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:33 [async_llm.py:261] Added request cmpl-116cbe6ad80248bfaa2f870af0f19aed-0.
INFO 03-02 00:36:34 [logger.py:42] Received request cmpl-5805e16624f44029b1de254adef2a902-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:34 [async_llm.py:261] Added request cmpl-5805e16624f44029b1de254adef2a902-0.
INFO 03-02 00:36:35 [logger.py:42] Received request cmpl-19e272fa3e4348d3b227e419aba6b180-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:35 [async_llm.py:261] Added request cmpl-19e272fa3e4348d3b227e419aba6b180-0.
INFO 03-02 00:36:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:36:36 [logger.py:42] Received request cmpl-dd11f9a3ea044826bdd7293cbca1284e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:36 [async_llm.py:261] Added request cmpl-dd11f9a3ea044826bdd7293cbca1284e-0.
INFO 03-02 00:36:38 [logger.py:42] Received request cmpl-5245d26425a84da2a459153580a4afbc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:38 [async_llm.py:261] Added request cmpl-5245d26425a84da2a459153580a4afbc-0.
INFO 03-02 00:36:39 [logger.py:42] Received request cmpl-4045cb0434db4830b1f83ad373e442eb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:39 [async_llm.py:261] Added request cmpl-4045cb0434db4830b1f83ad373e442eb-0.
INFO 03-02 00:36:40 [logger.py:42] Received request cmpl-9309d951b57d4c3dbcfb037270243f00-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:40 [async_llm.py:261] Added request cmpl-9309d951b57d4c3dbcfb037270243f00-0.
INFO 03-02 00:36:41 [logger.py:42] Received request cmpl-15eeb7aee4234398964b7fd0a0f79f7e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:41 [async_llm.py:261] Added request cmpl-15eeb7aee4234398964b7fd0a0f79f7e-0.
INFO 03-02 00:36:42 [logger.py:42] Received request cmpl-0c7c533c7d2a4150b1178f09ee8450d5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:42 [async_llm.py:261] Added request cmpl-0c7c533c7d2a4150b1178f09ee8450d5-0.
INFO 03-02 00:36:43 [logger.py:42] Received request cmpl-5faf07326d87441cafd0f686a1607534-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:43 [async_llm.py:261] Added request cmpl-5faf07326d87441cafd0f686a1607534-0.
INFO 03-02 00:36:44 [logger.py:42] Received request cmpl-1165775f551847779d242dbc49f1936d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:44 [async_llm.py:261] Added request cmpl-1165775f551847779d242dbc49f1936d-0.
INFO 03-02 00:36:45 [logger.py:42] Received request cmpl-ecbcb1232ed946e5af2413bdc5fa14dd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:45 [async_llm.py:261] Added request cmpl-ecbcb1232ed946e5af2413bdc5fa14dd-0.
INFO 03-02 00:36:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:36:46 [logger.py:42] Received request cmpl-5595bd3c73e848e486934ce726c8a5ae-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:46 [async_llm.py:261] Added request cmpl-5595bd3c73e848e486934ce726c8a5ae-0.
INFO 03-02 00:36:47 [logger.py:42] Received request cmpl-babf516636a44119886f6071c132475e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:47 [async_llm.py:261] Added request cmpl-babf516636a44119886f6071c132475e-0.
INFO 03-02 00:36:48 [logger.py:42] Received request cmpl-c99a8a277fec4eec9c0fde8b50197fc7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:48 [async_llm.py:261] Added request cmpl-c99a8a277fec4eec9c0fde8b50197fc7-0.
INFO 03-02 00:36:49 [logger.py:42] Received request cmpl-d66541c1b42e4b218207db04c41e1307-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:49 [async_llm.py:261] Added request cmpl-d66541c1b42e4b218207db04c41e1307-0.
INFO 03-02 00:36:51 [logger.py:42] Received request cmpl-9ae29a1b935f4fed89fcc82b915b13a1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:51 [async_llm.py:261] Added request cmpl-9ae29a1b935f4fed89fcc82b915b13a1-0.
INFO 03-02 00:36:52 [logger.py:42] Received request cmpl-9376702ef81e47d7820d91e55d4b347a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:52 [async_llm.py:261] Added request cmpl-9376702ef81e47d7820d91e55d4b347a-0.
INFO 03-02 00:36:53 [logger.py:42] Received request cmpl-5eb6b5f581c743c98047d5437aa81e3a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:53 [async_llm.py:261] Added request cmpl-5eb6b5f581c743c98047d5437aa81e3a-0.
INFO 03-02 00:36:54 [logger.py:42] Received request cmpl-5ea354c0966f416b8ad1386183af5d60-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:54 [async_llm.py:261] Added request cmpl-5ea354c0966f416b8ad1386183af5d60-0.
INFO 03-02 00:36:55 [logger.py:42] Received request cmpl-d272e4ea01654285bb90816e66bbd2f9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:55 [async_llm.py:261] Added request cmpl-d272e4ea01654285bb90816e66bbd2f9-0.
INFO 03-02 00:36:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:36:56 [logger.py:42] Received request cmpl-1c63cd4e6f2d4bf1b3dd48370f9b7eb6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:56 [async_llm.py:261] Added request cmpl-1c63cd4e6f2d4bf1b3dd48370f9b7eb6-0.
INFO 03-02 00:36:57 [logger.py:42] Received request cmpl-9ca7e1372b96433c8171a13d72686297-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:57 [async_llm.py:261] Added request cmpl-9ca7e1372b96433c8171a13d72686297-0.
INFO 03-02 00:36:58 [logger.py:42] Received request cmpl-3e0ec60831a14c05a0bdb494d1872335-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:58 [async_llm.py:261] Added request cmpl-3e0ec60831a14c05a0bdb494d1872335-0.
INFO 03-02 00:36:59 [logger.py:42] Received request cmpl-fcc5ab0d551447429b0e2c4c49dcb031-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:59 [async_llm.py:261] Added request cmpl-fcc5ab0d551447429b0e2c4c49dcb031-0.
INFO 03-02 00:37:00 [logger.py:42] Received request cmpl-bdd4432771364b1585ca583c80667814-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:00 [async_llm.py:261] Added request cmpl-bdd4432771364b1585ca583c80667814-0.
INFO 03-02 00:37:01 [logger.py:42] Received request cmpl-94fdbc9d05a94c12bb904260bb13027f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:01 [async_llm.py:261] Added request cmpl-94fdbc9d05a94c12bb904260bb13027f-0.
INFO 03-02 00:37:02 [logger.py:42] Received request cmpl-6312d5ae214f4a0bba9e618774fc1685-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:02 [async_llm.py:261] Added request cmpl-6312d5ae214f4a0bba9e618774fc1685-0.
INFO 03-02 00:37:04 [logger.py:42] Received request cmpl-ce5209d6f06c48f0a3c67504c8d63c98-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:04 [async_llm.py:261] Added request cmpl-ce5209d6f06c48f0a3c67504c8d63c98-0.
INFO 03-02 00:37:05 [logger.py:42] Received request cmpl-e3345c3ebfb745e798640257011c6fb3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:05 [async_llm.py:261] Added request cmpl-e3345c3ebfb745e798640257011c6fb3-0.
INFO 03-02 00:37:06 [logger.py:42] Received request cmpl-d6755e0b349d4d46a40d8f7f51ad5bb8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:06 [async_llm.py:261] Added request cmpl-d6755e0b349d4d46a40d8f7f51ad5bb8-0.
INFO 03-02 00:37:06 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.9 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 1.3%, Prefix cache hit rate: 0.0%
INFO 03-02 00:37:07 [logger.py:42] Received request cmpl-1855f86adf9349d687f32bd9e070b4e7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:07 [async_llm.py:261] Added request cmpl-1855f86adf9349d687f32bd9e070b4e7-0.
INFO 03-02 00:37:08 [logger.py:42] Received request cmpl-0de150720d9b4c099d53c583239f321f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:08 [async_llm.py:261] Added request cmpl-0de150720d9b4c099d53c583239f321f-0.
INFO 03-02 00:37:09 [logger.py:42] Received request cmpl-453bacb2e6b44201903ff4c73600978b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:09 [async_llm.py:261] Added request cmpl-453bacb2e6b44201903ff4c73600978b-0.
INFO 03-02 00:37:10 [logger.py:42] Received request cmpl-daaf777db3bf4645a8df3caebb0a611e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:10 [async_llm.py:261] Added request cmpl-daaf777db3bf4645a8df3caebb0a611e-0.
INFO 03-02 00:37:11 [logger.py:42] Received request cmpl-8f07bb020a854f86a0f78a6d76d7f78a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:11 [async_llm.py:261] Added request cmpl-8f07bb020a854f86a0f78a6d76d7f78a-0.
INFO 03-02 00:37:12 [logger.py:42] Received request cmpl-fc2c9874736c4f4c81323c73045a760f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:12 [async_llm.py:261] Added request cmpl-fc2c9874736c4f4c81323c73045a760f-0.
INFO 03-02 00:37:13 [logger.py:42] Received request cmpl-e5546d6ffaba436baa97a19ba6dbecd9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:13 [async_llm.py:261] Added request cmpl-e5546d6ffaba436baa97a19ba6dbecd9-0.
INFO 03-02 00:37:14 [logger.py:42] Received request cmpl-e9de4fcbc1af46bd90808dd0b1715624-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:14 [async_llm.py:261] Added request cmpl-e9de4fcbc1af46bd90808dd0b1715624-0.
INFO 03-02 00:37:15 [logger.py:42] Received request cmpl-cfc27012e8df492cbc0a442ee24aba7e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:15 [async_llm.py:261] Added request cmpl-cfc27012e8df492cbc0a442ee24aba7e-0.
INFO 03-02 00:37:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.6 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:37:17 [logger.py:42] Received request cmpl-b693fa3eac0e42eeaf396d58ad3d7145-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:17 [async_llm.py:261] Added request cmpl-b693fa3eac0e42eeaf396d58ad3d7145-0.
INFO 03-02 00:37:18 [logger.py:42] Received request cmpl-248633bdbcae49f0985ec9c5653e8531-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:18 [async_llm.py:261] Added request cmpl-248633bdbcae49f0985ec9c5653e8531-0.
INFO 03-02 00:37:19 [logger.py:42] Received request cmpl-117df6255f2243249c9fd586fd0abde3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:19 [async_llm.py:261] Added request cmpl-117df6255f2243249c9fd586fd0abde3-0.
INFO 03-02 00:37:20 [logger.py:42] Received request cmpl-b6f221b0b255419b9b5b61032b7be01c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:20 [async_llm.py:261] Added request cmpl-b6f221b0b255419b9b5b61032b7be01c-0.
INFO 03-02 00:37:21 [logger.py:42] Received request cmpl-c95e0c98e21f4213a462f859ddf9e14f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:21 [async_llm.py:261] Added request cmpl-c95e0c98e21f4213a462f859ddf9e14f-0.
INFO 03-02 00:37:22 [logger.py:42] Received request cmpl-085171d6917d412f965320216bab1f83-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:22 [async_llm.py:261] Added request cmpl-085171d6917d412f965320216bab1f83-0.
INFO 03-02 00:37:23 [logger.py:42] Received request cmpl-5263067eed1349d489bc308a17837daa-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:23 [async_llm.py:261] Added request cmpl-5263067eed1349d489bc308a17837daa-0.
INFO 03-02 00:37:24 [logger.py:42] Received request cmpl-4003362a4afe4af5bc6d26c8696d858e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:24 [async_llm.py:261] Added request cmpl-4003362a4afe4af5bc6d26c8696d858e-0.
INFO 03-02 00:37:25 [logger.py:42] Received request cmpl-62dd13f47c4e45d994b7d248b8427590-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:25 [async_llm.py:261] Added request cmpl-62dd13f47c4e45d994b7d248b8427590-0.
INFO 03-02 00:37:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:37:26 [logger.py:42] Received request cmpl-44b32e0d8c7f4597a4343f5deb4fe67d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:26 [async_llm.py:261] Added request cmpl-44b32e0d8c7f4597a4343f5deb4fe67d-0.
INFO 03-02 00:37:27 [logger.py:42] Received request cmpl-db5293258ab542fc81262accdecf36bc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:27 [async_llm.py:261] Added request cmpl-db5293258ab542fc81262accdecf36bc-0.
INFO 03-02 00:37:28 [logger.py:42] Received request cmpl-2d87abeb2d254ddd98c1b0730941932f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:28 [async_llm.py:261] Added request cmpl-2d87abeb2d254ddd98c1b0730941932f-0.
INFO 03-02 00:37:30 [logger.py:42] Received request cmpl-e248f20f248d4803a225015edfffb187-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:30 [async_llm.py:261] Added request cmpl-e248f20f248d4803a225015edfffb187-0.
INFO 03-02 00:37:31 [logger.py:42] Received request cmpl-7197f53135c24a9e81b1d416d3ed316c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:31 [async_llm.py:261] Added request cmpl-7197f53135c24a9e81b1d416d3ed316c-0.
INFO 03-02 00:37:32 [logger.py:42] Received request cmpl-34a280fd2f9443e7a2cc019d29eb76e2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:32 [async_llm.py:261] Added request cmpl-34a280fd2f9443e7a2cc019d29eb76e2-0.
INFO 03-02 00:37:33 [logger.py:42] Received request cmpl-e25f5ed3b0b245148fea1253c5849296-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:33 [async_llm.py:261] Added request cmpl-e25f5ed3b0b245148fea1253c5849296-0.
INFO 03-02 00:37:34 [logger.py:42] Received request cmpl-346e216b283b4ee3a56d744617365cc7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:34 [async_llm.py:261] Added request cmpl-346e216b283b4ee3a56d744617365cc7-0.
INFO 03-02 00:37:35 [logger.py:42] Received request cmpl-817954e534e74bfd83fa11ffed2ecbc4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:35 [async_llm.py:261] Added request cmpl-817954e534e74bfd83fa11ffed2ecbc4-0.
INFO 03-02 00:37:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:37:36 [logger.py:42] Received request cmpl-7ff7d1301f4a41868ff3c0951039f09e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:36 [async_llm.py:261] Added request cmpl-7ff7d1301f4a41868ff3c0951039f09e-0.
INFO 03-02 00:37:37 [logger.py:42] Received request cmpl-007f027567174cc78bc40143d1294f5c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:37 [async_llm.py:261] Added request cmpl-007f027567174cc78bc40143d1294f5c-0.
INFO 03-02 00:37:38 [logger.py:42] Received request cmpl-4c7da553ec3144179c7fca8507a86e5f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:38 [async_llm.py:261] Added request cmpl-4c7da553ec3144179c7fca8507a86e5f-0.
INFO 03-02 00:37:39 [logger.py:42] Received request cmpl-16ba113938e34cdeb969349eb2a418f0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:39 [async_llm.py:261] Added request cmpl-16ba113938e34cdeb969349eb2a418f0-0.
INFO 03-02 00:37:40 [logger.py:42] Received request cmpl-251585b2fbc24d13a366f2885f4ef601-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:40 [async_llm.py:261] Added request cmpl-251585b2fbc24d13a366f2885f4ef601-0.
INFO 03-02 00:37:41 [logger.py:42] Received request cmpl-563461a466c64da18621f811bb75c30f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:41 [async_llm.py:261] Added request cmpl-563461a466c64da18621f811bb75c30f-0.
INFO 03-02 00:37:43 [logger.py:42] Received request cmpl-09ec5b132a0a4cc9a7de496fee7abb6b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:43 [async_llm.py:261] Added request cmpl-09ec5b132a0a4cc9a7de496fee7abb6b-0.
INFO 03-02 00:37:44 [logger.py:42] Received request cmpl-2c1593b10e014419a7a57263192b6976-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:44 [async_llm.py:261] Added request cmpl-2c1593b10e014419a7a57263192b6976-0.
INFO 03-02 00:37:45 [logger.py:42] Received request cmpl-c0cafddc06814d44af10ce7d89314ddd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:45 [async_llm.py:261] Added request cmpl-c0cafddc06814d44af10ce7d89314ddd-0.
INFO 03-02 00:37:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:37:46 [logger.py:42] Received request cmpl-608fccd0107f4b51ad141afac64aea26-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:46 [async_llm.py:261] Added request cmpl-608fccd0107f4b51ad141afac64aea26-0.
INFO 03-02 00:37:47 [logger.py:42] Received request cmpl-c249b53763524a90bbae225d719d1377-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:47 [async_llm.py:261] Added request cmpl-c249b53763524a90bbae225d719d1377-0.
INFO 03-02 00:37:48 [logger.py:42] Received request cmpl-6ccb8d60e37c459b8fdfec4ffc698256-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:48 [async_llm.py:261] Added request cmpl-6ccb8d60e37c459b8fdfec4ffc698256-0.
INFO 03-02 00:37:49 [logger.py:42] Received request cmpl-b8d6747f93ff41068f032348e0074994-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:49 [async_llm.py:261] Added request cmpl-b8d6747f93ff41068f032348e0074994-0.
INFO 03-02 00:37:50 [logger.py:42] Received request cmpl-e77e36792cf04be8beda7422d4349977-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:50 [async_llm.py:261] Added request cmpl-e77e36792cf04be8beda7422d4349977-0.
INFO 03-02 00:37:51 [logger.py:42] Received request cmpl-32aa096010de4076ae3713e0f4e6676b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:51 [async_llm.py:261] Added request cmpl-32aa096010de4076ae3713e0f4e6676b-0.
INFO 03-02 00:37:52 [logger.py:42] Received request cmpl-6ebee945e8e54665b569d183af8c824e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:52 [async_llm.py:261] Added request cmpl-6ebee945e8e54665b569d183af8c824e-0.
INFO 03-02 00:37:53 [logger.py:42] Received request cmpl-165bb47cd28440819a35fe24d41a8930-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:53 [async_llm.py:261] Added request cmpl-165bb47cd28440819a35fe24d41a8930-0.
INFO 03-02 00:37:55 [logger.py:42] Received request cmpl-735890fece67463cba62b2f59f487ae0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:55 [async_llm.py:261] Added request cmpl-735890fece67463cba62b2f59f487ae0-0.
INFO 03-02 00:37:56 [logger.py:42] Received request cmpl-6f7d1adf3b5b4a4bb141270bbdf5d9ca-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:56 [async_llm.py:261] Added request cmpl-6f7d1adf3b5b4a4bb141270bbdf5d9ca-0.
INFO 03-02 00:37:56 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:37:57 [logger.py:42] Received request cmpl-f09a505048cc463e96e0842b934ade08-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:57 [async_llm.py:261] Added request cmpl-f09a505048cc463e96e0842b934ade08-0.
INFO 03-02 00:37:58 [logger.py:42] Received request cmpl-cae025dbf643406d8fcaaeefb0288276-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:58 [async_llm.py:261] Added request cmpl-cae025dbf643406d8fcaaeefb0288276-0.
INFO 03-02 00:37:59 [logger.py:42] Received request cmpl-9a4123eac17c489b801e800715a4dead-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:59 [async_llm.py:261] Added request cmpl-9a4123eac17c489b801e800715a4dead-0.
INFO 03-02 00:38:00 [logger.py:42] Received request cmpl-30c23369da544b2eac8bc3eb0502622f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:38:00 [async_llm.py:261] Added request cmpl-30c23369da544b2eac8bc3eb0502622f-0.
INFO 03-02 00:38:01 [logger.py:42] Received request cmpl-68119b77a18a48058914fb28172b9f36-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:38:01 [async_llm.py:261] Added request cmpl-68119b77a18a48058914fb28172b9f36-0.
INFO 03-02 00:38:02 [logger.py:42] Received request cmpl-cefa8fa689ad475c80af39e348adbff2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:38:02 [async_llm.py:261] Added request cmpl-cefa8fa689ad475c80af39e348adbff2-0.
INFO 03-02 00:38:03 [logger.py:42] Received request cmpl-9260853ff2634034b84a0408144bbf23-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:38:03 [async_llm.py:261] Added request cmpl-9260853ff2634034b84a0408144bbf23-0.
INFO 03-02 00:38:04 [logger.py:42] Received request cmpl-7f176f2253904957b4a9682eda5ba9bf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:38:04 [async_llm.py:261] Added request cmpl-7f176f2253904957b4a9682eda5ba9bf-0.
INFO 03-02 00:38:05 [logger.py:42] Received request cmpl-fd95ac9727724bfcb7fb5ee6654705d7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:38:05 [async_llm.py:261] Added request cmpl-fd95ac9727724bfcb7fb5ee6654705d7-0.
INFO 03-02 00:38:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:38:06 [logger.py:42] Received request cmpl-37527054548248b3b9642ea7aa3cca6b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:38:06 [async_llm.py:261] Added request cmpl-37527054548248b3b9642ea7aa3cca6b-0.
INFO 03-02 00:38:08 [logger.py:42] Received request cmpl-6d7486cd39d0441689adc89128be60d2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:38:08 [async_llm.py:261] Added request cmpl-6d7486cd39d0441689adc89128be60d2-0.
INFO 03-02 00:38:09 [logger.py:42] Received request cmpl-53dd2e830cdb4d098aa4e2c4e8ee203c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:38:09 [async_llm.py:261] Added request cmpl-53dd2e830cdb4d098aa4e2c4e8ee203c-0.
INFO 03-02 00:38:10 [logger.py:42] Received request cmpl-176fbd38cf91466fb77b250c8625f799-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:38:10 [async_llm.py:261] Added request cmpl-176fbd38cf91466fb77b250c8625f799-0.
INFO 03-02 00:38:11 [logger.py:42] Received request cmpl-aa94bef01f90422f8f2f20babd3ecd00-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:38:11 [async_llm.py:261] Added request cmpl-aa94bef01f90422f8f2f20babd3ecd00-0.
INFO 03-02 00:38:12 [logger.py:42] Received request cmpl-c52adc5edce546ca81a70801fde89b2d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:38:12 [async_llm.py:261] Added request cmpl-c52adc5edce546ca81a70801fde89b2d-0.
INFO 03-02 00:38:13 [logger.py:42] Received request cmpl-7844ff2307b8469e8ded99e857d8df6d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:38:13 [async_llm.py:261] Added request cmpl-7844ff2307b8469e8ded99e857d8df6d-0.
INFO 03-02 00:38:14 [logger.py:42] Received request cmpl-14ade2a3ebbc40bca7a8a8a47e55b1f0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:38:14 [async_llm.py:261] Added request cmpl-14ade2a3ebbc40bca7a8a8a47e55b1f0-0.
INFO 03-02 00:38:15 [logger.py:42] Received request cmpl-46e7f029bdd046d8898d2fed984db96b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:38:15 [async_llm.py:261] Added request cmpl-46e7f029bdd046d8898d2fed984db96b-0.
INFO 03-02 00:38:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:38:16 [logger.py:42] Received request cmpl-0c5d6f447311477387724fb97ff65b42-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:38:16 [async_llm.py:261] Added request cmpl-0c5d6f447311477387724fb97ff65b42-0.
INFO 03-02 00:38:17 [logger.py:42] Received request cmpl-44e5a80c66bf4a21a1714663907acb25-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:38:17 [async_llm.py:261] Added request cmpl-44e5a80c66bf4a21a1714663907acb25-0.
INFO 03-02 00:38:18 [logger.py:42] Received request cmpl-add951ab629842b687a6d7d8066008bc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:38:18 [async_llm.py:261] Added request cmpl-add951ab629842b687a6d7d8066008bc-0.
INFO 03-02 00:38:19 [logger.py:42] Received request cmpl-fbf66e3ee7a742a7a96beefd1afc68a2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:38:19 [async_llm.py:261] Added request cmpl-fbf66e3ee7a742a7a96beefd1afc68a2-0.
INFO 03-02 00:38:21 [logger.py:42] Received request cmpl-614034302fb94694881550963109314b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:38:21 [async_llm.py:261] Added request cmpl-614034302fb94694881550963109314b-0.
INFO 03-02 00:38:22 [logger.py:42] Received request cmpl-970b5e0d445e4288bcd6a73abdee25b5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:38:22 [async_llm.py:261] Added request cmpl-970b5e0d445e4288bcd6a73abdee25b5-0.
[... 3 further request/response cycles (identical prompt and SamplingParams, 00:38:23–00:38:25) omitted; only timestamps and request IDs differ ...]
INFO 03-02 00:38:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[... 10 further request/response cycles (identical prompt and SamplingParams, 00:38:26–00:38:36) omitted; only timestamps and request IDs differ ...]
INFO 03-02 00:38:36 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[... 9 further request/response cycles (identical prompt and SamplingParams, 00:38:37–00:38:45) omitted; only timestamps and request IDs differ ...]
INFO 03-02 00:38:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[... 9 further request/response cycles (identical prompt and SamplingParams, 00:38:47–00:38:55) omitted; only timestamps and request IDs differ ...]
INFO 03-02 00:38:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[... 9 further request/response cycles (identical prompt and SamplingParams, 00:38:56–00:39:05) omitted; only timestamps and request IDs differ ...]
INFO 03-02 00:39:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:39:06 [logger.py:42] Received request cmpl-e9a7b23e14804d7884b8d2203ba0213d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:06 [async_llm.py:261] Added request cmpl-e9a7b23e14804d7884b8d2203ba0213d-0.
INFO 03-02 00:39:07 [logger.py:42] Received request cmpl-0fca68029a8141bba00b9eb47eff2009-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:07 [async_llm.py:261] Added request cmpl-0fca68029a8141bba00b9eb47eff2009-0.
INFO 03-02 00:39:08 [logger.py:42] Received request cmpl-cc6d0f259c544d0394359d7bff9287f5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:08 [async_llm.py:261] Added request cmpl-cc6d0f259c544d0394359d7bff9287f5-0.
INFO 03-02 00:39:09 [logger.py:42] Received request cmpl-9a3508c6f15545c6928b6520744b3c90-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:09 [async_llm.py:261] Added request cmpl-9a3508c6f15545c6928b6520744b3c90-0.
INFO 03-02 00:39:10 [logger.py:42] Received request cmpl-f3bde595408b4d0aa6f8606202c53ede-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:10 [async_llm.py:261] Added request cmpl-f3bde595408b4d0aa6f8606202c53ede-0.
INFO 03-02 00:39:11 [logger.py:42] Received request cmpl-caea9765ac8d47f6a669fe799e178d50-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:11 [async_llm.py:261] Added request cmpl-caea9765ac8d47f6a669fe799e178d50-0.
INFO 03-02 00:39:13 [logger.py:42] Received request cmpl-2cd53462bcd04a398562b6fb1adfc678-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:13 [async_llm.py:261] Added request cmpl-2cd53462bcd04a398562b6fb1adfc678-0.
INFO 03-02 00:39:14 [logger.py:42] Received request cmpl-ea7aa84bfe934789bf29e7a0a67d2b6a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:14 [async_llm.py:261] Added request cmpl-ea7aa84bfe934789bf29e7a0a67d2b6a-0.
INFO 03-02 00:39:15 [logger.py:42] Received request cmpl-c626156075d840ef8d6416c3b0164130-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:15 [async_llm.py:261] Added request cmpl-c626156075d840ef8d6416c3b0164130-0.
INFO 03-02 00:39:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:39:16 [logger.py:42] Received request cmpl-6c842d1b40264781bf1871bd545c9130-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:16 [async_llm.py:261] Added request cmpl-6c842d1b40264781bf1871bd545c9130-0.
INFO 03-02 00:39:17 [logger.py:42] Received request cmpl-36b0079b505b4ed486fa3bab7afd51fc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:17 [async_llm.py:261] Added request cmpl-36b0079b505b4ed486fa3bab7afd51fc-0.
INFO 03-02 00:39:18 [logger.py:42] Received request cmpl-5ca32d586c1a4219aba14a428e183d34-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:18 [async_llm.py:261] Added request cmpl-5ca32d586c1a4219aba14a428e183d34-0.
INFO 03-02 00:39:19 [logger.py:42] Received request cmpl-d08b2ecc48604f1a913bf3c4677bb1fe-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:19 [async_llm.py:261] Added request cmpl-d08b2ecc48604f1a913bf3c4677bb1fe-0.
INFO 03-02 00:39:20 [logger.py:42] Received request cmpl-50e1869b16f04c7ca9dc3ebacee895d6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:20 [async_llm.py:261] Added request cmpl-50e1869b16f04c7ca9dc3ebacee895d6-0.
INFO 03-02 00:39:21 [logger.py:42] Received request cmpl-f94e2b9fb142455eacb94434a1713da4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:21 [async_llm.py:261] Added request cmpl-f94e2b9fb142455eacb94434a1713da4-0.
INFO 03-02 00:39:22 [logger.py:42] Received request cmpl-4e05de41b8444567b4546ac92144af89-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:22 [async_llm.py:261] Added request cmpl-4e05de41b8444567b4546ac92144af89-0.
INFO 03-02 00:39:23 [logger.py:42] Received request cmpl-6de877da51304b7ca9cbeccb7ee46cb4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:23 [async_llm.py:261] Added request cmpl-6de877da51304b7ca9cbeccb7ee46cb4-0.
INFO 03-02 00:39:24 [logger.py:42] Received request cmpl-97a705a5ca0b48bdad8ab417270106ad-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:24 [async_llm.py:261] Added request cmpl-97a705a5ca0b48bdad8ab417270106ad-0.
INFO 03-02 00:39:26 [logger.py:42] Received request cmpl-2ba61062bd5248ea8aba2ff14a8a8891-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:26 [async_llm.py:261] Added request cmpl-2ba61062bd5248ea8aba2ff14a8a8891-0.
INFO 03-02 00:39:26 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:39:27 [logger.py:42] Received request cmpl-54b6fb3f8c2a4f3f9b47b97e523ce816-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:27 [async_llm.py:261] Added request cmpl-54b6fb3f8c2a4f3f9b47b97e523ce816-0.
INFO 03-02 00:39:28 [logger.py:42] Received request cmpl-34b920cfe15940b3a90512a827a96e9e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:28 [async_llm.py:261] Added request cmpl-34b920cfe15940b3a90512a827a96e9e-0.
INFO 03-02 00:39:29 [logger.py:42] Received request cmpl-e7e23d451c7c4ce790be7c2fdb4ea654-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:29 [async_llm.py:261] Added request cmpl-e7e23d451c7c4ce790be7c2fdb4ea654-0.
INFO 03-02 00:39:30 [logger.py:42] Received request cmpl-72548778e2d74d1fada4d8eec3101aeb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:30 [async_llm.py:261] Added request cmpl-72548778e2d74d1fada4d8eec3101aeb-0.
INFO 03-02 00:39:31 [logger.py:42] Received request cmpl-b8753df2ad9a4d0bba7c509e4f11c7d9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:31 [async_llm.py:261] Added request cmpl-b8753df2ad9a4d0bba7c509e4f11c7d9-0.
INFO 03-02 00:39:32 [logger.py:42] Received request cmpl-5172a9d5cda74780bfdfa77e1f0fbde5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:32 [async_llm.py:261] Added request cmpl-5172a9d5cda74780bfdfa77e1f0fbde5-0.
INFO 03-02 00:39:33 [logger.py:42] Received request cmpl-2bd3f3d9dc754d18b680e8d08e61e490-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:33 [async_llm.py:261] Added request cmpl-2bd3f3d9dc754d18b680e8d08e61e490-0.
INFO 03-02 00:39:34 [logger.py:42] Received request cmpl-519f7f83bd6241e8a9a4fa81b04cb4c7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:34 [async_llm.py:261] Added request cmpl-519f7f83bd6241e8a9a4fa81b04cb4c7-0.
INFO 03-02 00:39:35 [logger.py:42] Received request cmpl-359ff888b3b14c5cac3bb88c1120f43f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:35 [async_llm.py:261] Added request cmpl-359ff888b3b14c5cac3bb88c1120f43f-0.
INFO 03-02 00:39:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:39:36 [logger.py:42] Received request cmpl-bfd73c61bcd04246a65860751db1e64c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:36 [async_llm.py:261] Added request cmpl-bfd73c61bcd04246a65860751db1e64c-0.
INFO 03-02 00:39:37 [logger.py:42] Received request cmpl-57f3a4e1813248acbf21ecfd77b312d3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:37 [async_llm.py:261] Added request cmpl-57f3a4e1813248acbf21ecfd77b312d3-0.
INFO 03-02 00:39:39 [logger.py:42] Received request cmpl-a0277ee838744f3c9dd616a3ce583d9c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:39 [async_llm.py:261] Added request cmpl-a0277ee838744f3c9dd616a3ce583d9c-0.
INFO 03-02 00:39:40 [logger.py:42] Received request cmpl-27b9c0ad3c4b4d1c996f1c1a16b16d7b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:40 [async_llm.py:261] Added request cmpl-27b9c0ad3c4b4d1c996f1c1a16b16d7b-0.
[… the same three-line pattern repeats roughly once per second from 00:39:41 through 00:40:24 — "Received request cmpl-…" with identical SamplingParams (temperature=0.0, max_tokens=5), a "POST /v1/completions HTTP/1.1" 200 OK from 1.2.3.5:1235, then "Added request cmpl-…"; only the request IDs and timestamps change. The periodic engine stats over this window: …]
INFO 03-02 00:39:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:39:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:40:06 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:40:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:40:24 [async_llm.py:261] Added request cmpl-1d962df48b1240e5ab43068bb54db9fd-0.
INFO 03-02 00:40:25 [logger.py:42] Received request cmpl-5c80e9065c9a4f5f970676ba9f6fb0f0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:40:25 [async_llm.py:261] Added request cmpl-5c80e9065c9a4f5f970676ba9f6fb0f0-0.
INFO 03-02 00:40:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:40:26 [logger.py:42] Received request cmpl-bd487b628fc54ec9bd9a51fa693cee75-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:40:26 [async_llm.py:261] Added request cmpl-bd487b628fc54ec9bd9a51fa693cee75-0.
INFO 03-02 00:40:27 [logger.py:42] Received request cmpl-3c6ac5e3dd36412db6b158ff003942a0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:40:27 [async_llm.py:261] Added request cmpl-3c6ac5e3dd36412db6b158ff003942a0-0.
INFO 03-02 00:40:28 [logger.py:42] Received request cmpl-ad8a49514b0348fc98aecddeff6d7f15-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:40:28 [async_llm.py:261] Added request cmpl-ad8a49514b0348fc98aecddeff6d7f15-0.
INFO 03-02 00:40:30 [logger.py:42] Received request cmpl-f15ceeaa35d444a797e71b94fabc1caf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:40:30 [async_llm.py:261] Added request cmpl-f15ceeaa35d444a797e71b94fabc1caf-0.
INFO 03-02 00:40:31 [logger.py:42] Received request cmpl-b038bf54eb034470894794bf88aeea56-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:40:31 [async_llm.py:261] Added request cmpl-b038bf54eb034470894794bf88aeea56-0.
INFO 03-02 00:40:32 [logger.py:42] Received request cmpl-0c9c117f744d4bc88e49c5343dcb1c01-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:40:32 [async_llm.py:261] Added request cmpl-0c9c117f744d4bc88e49c5343dcb1c01-0.
INFO 03-02 00:40:33 [logger.py:42] Received request cmpl-5aa70b9eaee944af9a7d0410e74e3a02-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:40:33 [async_llm.py:261] Added request cmpl-5aa70b9eaee944af9a7d0410e74e3a02-0.
INFO 03-02 00:40:34 [logger.py:42] Received request cmpl-9fa71bcc521849e89e94b57bdc5a84f2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:40:34 [async_llm.py:261] Added request cmpl-9fa71bcc521849e89e94b57bdc5a84f2-0.
INFO 03-02 00:40:35 [logger.py:42] Received request cmpl-52e6d1b9718a40bda2695c02e3f3a479-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:40:35 [async_llm.py:261] Added request cmpl-52e6d1b9718a40bda2695c02e3f3a479-0.
INFO 03-02 00:40:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:40:36 [logger.py:42] Received request cmpl-b3cbfca35ca146c2a2fc12e3b04fdb69-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:40:36 [async_llm.py:261] Added request cmpl-b3cbfca35ca146c2a2fc12e3b04fdb69-0.
INFO 03-02 00:40:37 [logger.py:42] Received request cmpl-a9ddfe72a8c14948bc80d36c94df601a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:40:37 [async_llm.py:261] Added request cmpl-a9ddfe72a8c14948bc80d36c94df601a-0.
INFO 03-02 00:40:38 [logger.py:42] Received request cmpl-ab71d62669394c298aa2bf1a41e1b3c5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:40:38 [async_llm.py:261] Added request cmpl-ab71d62669394c298aa2bf1a41e1b3c5-0.
INFO 03-02 00:40:39 [logger.py:42] Received request cmpl-91d3dd7cdda541b8a4ddac7d3a0b9e64-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:40:39 [async_llm.py:261] Added request cmpl-91d3dd7cdda541b8a4ddac7d3a0b9e64-0.
INFO 03-02 00:40:40 [logger.py:42] Received request cmpl-153050e85b8f441ca6f64195ed6645dc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:40:40 [async_llm.py:261] Added request cmpl-153050e85b8f441ca6f64195ed6645dc-0.
INFO 03-02 00:40:41 [logger.py:42] Received request cmpl-74acba1197184335b2029527242069b6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:40:41 [async_llm.py:261] Added request cmpl-74acba1197184335b2029527242069b6-0.
INFO 03-02 00:40:43 [logger.py:42] Received request cmpl-059579ba1b1f4cac95c5cd3147820061-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:40:43 [async_llm.py:261] Added request cmpl-059579ba1b1f4cac95c5cd3147820061-0.
INFO 03-02 00:40:44 [logger.py:42] Received request cmpl-3d999abcc87f43c3ab5458c978e56875-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:40:44 [async_llm.py:261] Added request cmpl-3d999abcc87f43c3ab5458c978e56875-0.
INFO 03-02 00:40:45 [logger.py:42] Received request cmpl-2e51ca450d4d45dfa8effb650b10d33d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:40:45 [async_llm.py:261] Added request cmpl-2e51ca450d4d45dfa8effb650b10d33d-0.
INFO 03-02 00:40:46 [logger.py:42] Received request cmpl-885993405ea940fc9e0e6c18ad857335-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:40:46 [async_llm.py:261] Added request cmpl-885993405ea940fc9e0e6c18ad857335-0.
INFO 03-02 00:40:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:40:47 [logger.py:42] Received request cmpl-476daf1df9b247d088b0e0d57fdf4f9b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:40:47 [async_llm.py:261] Added request cmpl-476daf1df9b247d088b0e0d57fdf4f9b-0.
INFO 03-02 00:40:48 [logger.py:42] Received request cmpl-5ed37d567d5d40c4b82f5e6f94c5ddea-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:40:48 [async_llm.py:261] Added request cmpl-5ed37d567d5d40c4b82f5e6f94c5ddea-0.
INFO 03-02 00:40:49 [logger.py:42] Received request cmpl-24262c06aa2d432483ba33ffd6692d26-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:40:49 [async_llm.py:261] Added request cmpl-24262c06aa2d432483ba33ffd6692d26-0.
INFO 03-02 00:40:50 [logger.py:42] Received request cmpl-ce3a9f9f77eb40fba38e525358582610-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:40:50 [async_llm.py:261] Added request cmpl-ce3a9f9f77eb40fba38e525358582610-0.
INFO 03-02 00:40:51 [logger.py:42] Received request cmpl-f959bc47cedd4516842fe10da024ca81-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:40:51 [async_llm.py:261] Added request cmpl-f959bc47cedd4516842fe10da024ca81-0.
INFO 03-02 00:40:52 [logger.py:42] Received request cmpl-bb015f76a1534cbd8cdf3adc64e4ddfd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:40:52 [async_llm.py:261] Added request cmpl-bb015f76a1534cbd8cdf3adc64e4ddfd-0.
INFO 03-02 00:40:53 [logger.py:42] Received request cmpl-c8ab4e88d0f64a1fb18546a6d5c1321e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:40:53 [async_llm.py:261] Added request cmpl-c8ab4e88d0f64a1fb18546a6d5c1321e-0.
INFO 03-02 00:40:54 [logger.py:42] Received request cmpl-c251d3a8904143f89c20d159a12f6b18-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:40:54 [async_llm.py:261] Added request cmpl-c251d3a8904143f89c20d159a12f6b18-0.
INFO 03-02 00:40:56 [logger.py:42] Received request cmpl-1d9f45c0c7564ddbb42aaaad5dc3f21d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:40:56 [async_llm.py:261] Added request cmpl-1d9f45c0c7564ddbb42aaaad5dc3f21d-0.
INFO 03-02 00:40:56 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
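Each "Received request" entry above is the engine-side record of an OpenAI-compatible `POST /v1/completions` call. A minimal sketch of the client payload that would produce the logged parameters (greedy decoding, 5-token cap); the host and model name here are illustrative assumptions taken from the funcpod metadata, not a verified endpoint:

```python
import json

# Payload mirroring the SamplingParams seen in the log:
# temperature=0.0 (greedy decoding), top_p=1.0, max_tokens=5.
payload = {
    "model": "translategemma-27b-it-FP8-Dynamic",  # model name from the funcpod table (assumption)
    "prompt": "write a quick sort algorithm.",
    "max_tokens": 5,
    "temperature": 0.0,
    "top_p": 1.0,
}

# A client would POST this JSON to the funcpod's OpenAI-compatible route,
# e.g. requests.post("http://<funcpod-host>/v1/completions", json=payload)
# (host elided; the access-log lines above show the resulting "200 OK").
print(json.dumps(payload, indent=2))
```

With `max_tokens=5` each request generates at most five tokens, which is consistent with the low average generation throughput (≈4.5–5.0 tokens/s) reported by the periodic `Engine 000` stats lines at this roughly one-request-per-second arrival rate.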
INFO 03-02 00:40:57 [logger.py:42] Received request cmpl-d027d5735e014be2a34f4803e724c54a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:40:57 [async_llm.py:261] Added request cmpl-d027d5735e014be2a34f4803e724c54a-0.
INFO 03-02 00:40:58 [logger.py:42] Received request cmpl-021a221ca6384aa690b822a2d61009bd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:40:58 [async_llm.py:261] Added request cmpl-021a221ca6384aa690b822a2d61009bd-0.
[... 7 further request cycles (00:40:59 through 00:41:05) elided: each is the same "Received request" / "200 OK" / "Added request" triplet with identical prompt and SamplingParams (max_tokens=5, temperature=0.0), differing only in timestamp and request ID ...]
INFO 03-02 00:41:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
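The periodic `loggers.py:116` stats line is the only aggregate signal in this stream, so it is the natural thing to scrape when monitoring a funcpod. A minimal parsing sketch (the line format is copied from the log above; the dictionary keys are our own naming, not anything vLLM or InferX defines):

```python
import re

# Matches the periodic vLLM engine stats line, e.g.:
# INFO 03-02 00:41:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, ...
STATS_RE = re.compile(
    r"Avg prompt throughput: (?P<prompt_tps>[\d.]+) tokens/s, "
    r"Avg generation throughput: (?P<gen_tps>[\d.]+) tokens/s, "
    r"Running: (?P<running>\d+) reqs, Waiting: (?P<waiting>\d+) reqs, "
    r"GPU KV cache usage: (?P<kv_pct>[\d.]+)%"
)

def parse_stats(line: str):
    """Return throughput/queue metrics from a stats line, or None for other lines."""
    m = STATS_RE.search(line)
    if m is None:
        return None
    d = m.groupdict()
    return {
        "prompt_tps": float(d["prompt_tps"]),
        "gen_tps": float(d["gen_tps"]),
        "running": int(d["running"]),
        "waiting": int(d["waiting"]),
        "kv_cache_pct": float(d["kv_pct"]),
    }
```

Feeding the stats lines from this log through `parse_stats` shows the pod idling between one-off requests: running and waiting queues stay at 0 and KV cache usage at 0.7%.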
[... 9 further request cycles (00:41:06 through 00:41:15) elided: same triplet pattern, identical prompt and SamplingParams, unique request IDs ...]
INFO 03-02 00:41:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[... 9 further request cycles (00:41:16 through 00:41:25) elided: same triplet pattern, identical prompt and SamplingParams, unique request IDs ...]
INFO 03-02 00:41:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[... 10 further request cycles (00:41:26 through 00:41:36) elided: same triplet pattern, identical prompt and SamplingParams, unique request IDs ...]
INFO 03-02 00:41:36 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[... 5 further request cycles (00:41:37 through 00:41:41) elided: same triplet pattern, identical prompt and SamplingParams, unique request IDs; the capture ends mid-cycle with a truncated "Received request" entry at 00:41:42 ...]
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:41:42 [async_llm.py:261] Added request cmpl-86c5274bc0b84dd9a902fd59feeef9f8-0.
INFO 03-02 00:41:43 [logger.py:42] Received request cmpl-bf0df9b21d4f48fc80b9326ca66bad10-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:41:43 [async_llm.py:261] Added request cmpl-bf0df9b21d4f48fc80b9326ca66bad10-0.
INFO 03-02 00:41:44 [logger.py:42] Received request cmpl-fa2c963027bf4b71a023fdd20c9764f8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:41:44 [async_llm.py:261] Added request cmpl-fa2c963027bf4b71a023fdd20c9764f8-0.
INFO 03-02 00:41:45 [logger.py:42] Received request cmpl-a254e51f0fe54a859f2d62709aeafcc9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:41:45 [async_llm.py:261] Added request cmpl-a254e51f0fe54a859f2d62709aeafcc9-0.
INFO 03-02 00:41:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:41:47 [logger.py:42] Received request cmpl-b08d5e419f6d44c0a98141ca68aff8b3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:41:47 [async_llm.py:261] Added request cmpl-b08d5e419f6d44c0a98141ca68aff8b3-0.
INFO 03-02 00:41:48 [logger.py:42] Received request cmpl-d345a089572c432794688c34a7e1941c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:41:48 [async_llm.py:261] Added request cmpl-d345a089572c432794688c34a7e1941c-0.
INFO 03-02 00:41:49 [logger.py:42] Received request cmpl-38e218979c30475b8e142ef91a71dd24-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:41:49 [async_llm.py:261] Added request cmpl-38e218979c30475b8e142ef91a71dd24-0.
INFO 03-02 00:41:50 [logger.py:42] Received request cmpl-8094e01a55274ce69aead0190b92101c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:41:50 [async_llm.py:261] Added request cmpl-8094e01a55274ce69aead0190b92101c-0.
INFO 03-02 00:41:51 [logger.py:42] Received request cmpl-b5e849cc0e6a4257b2cbd298b6253896-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:41:51 [async_llm.py:261] Added request cmpl-b5e849cc0e6a4257b2cbd298b6253896-0.
INFO 03-02 00:41:52 [logger.py:42] Received request cmpl-0a534b89f0044e82854a0e59e53cb42e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:41:52 [async_llm.py:261] Added request cmpl-0a534b89f0044e82854a0e59e53cb42e-0.
INFO 03-02 00:41:53 [logger.py:42] Received request cmpl-44e47e7a9cac4b1d8b23fb9b14e6bebf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:41:53 [async_llm.py:261] Added request cmpl-44e47e7a9cac4b1d8b23fb9b14e6bebf-0.
INFO 03-02 00:41:54 [logger.py:42] Received request cmpl-bf73824de9844eb18c29be674348427c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:41:54 [async_llm.py:261] Added request cmpl-bf73824de9844eb18c29be674348427c-0.
INFO 03-02 00:41:55 [logger.py:42] Received request cmpl-1a6a86fcf9714e3d96a508c1f12b55ef-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:41:55 [async_llm.py:261] Added request cmpl-1a6a86fcf9714e3d96a508c1f12b55ef-0.
INFO 03-02 00:41:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:41:56 [logger.py:42] Received request cmpl-30291e138165435889e2cf8dc3f12ec3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:41:56 [async_llm.py:261] Added request cmpl-30291e138165435889e2cf8dc3f12ec3-0.
INFO 03-02 00:41:57 [logger.py:42] Received request cmpl-cd741c1807dd47b8ab7693dbb9c71bef-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:41:57 [async_llm.py:261] Added request cmpl-cd741c1807dd47b8ab7693dbb9c71bef-0.
INFO 03-02 00:41:58 [logger.py:42] Received request cmpl-86403d35fd044b0b80ab28bdee0388b4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:41:58 [async_llm.py:261] Added request cmpl-86403d35fd044b0b80ab28bdee0388b4-0.
INFO 03-02 00:42:00 [logger.py:42] Received request cmpl-9f251061a4c846d5bfbe0af82f4e7571-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:42:00 [async_llm.py:261] Added request cmpl-9f251061a4c846d5bfbe0af82f4e7571-0.
INFO 03-02 00:42:01 [logger.py:42] Received request cmpl-2ce12a6d0294428099024c214c25c6cd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:42:01 [async_llm.py:261] Added request cmpl-2ce12a6d0294428099024c214c25c6cd-0.
INFO 03-02 00:42:02 [logger.py:42] Received request cmpl-b1cef7ab33ce4baa8ec4bc939940a8e4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:42:02 [async_llm.py:261] Added request cmpl-b1cef7ab33ce4baa8ec4bc939940a8e4-0.
INFO 03-02 00:42:03 [logger.py:42] Received request cmpl-560828f8387c4c4aba22c6285e255e0f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:42:03 [async_llm.py:261] Added request cmpl-560828f8387c4c4aba22c6285e255e0f-0.
INFO 03-02 00:42:04 [logger.py:42] Received request cmpl-e0377f01997a47ed9b27a44db4a49745-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:42:04 [async_llm.py:261] Added request cmpl-e0377f01997a47ed9b27a44db4a49745-0.
INFO 03-02 00:42:05 [logger.py:42] Received request cmpl-5d6da0ac6c354f4391e5675b9cca1a07-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:42:05 [async_llm.py:261] Added request cmpl-5d6da0ac6c354f4391e5675b9cca1a07-0.
INFO 03-02 00:42:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:42:06 [logger.py:42] Received request cmpl-8575c4e4b862472abf18c28d77ba76b4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:42:06 [async_llm.py:261] Added request cmpl-8575c4e4b862472abf18c28d77ba76b4-0.
INFO 03-02 00:42:07 [logger.py:42] Received request cmpl-0a18afccaca64919a46e975b3cb66904-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:42:07 [async_llm.py:261] Added request cmpl-0a18afccaca64919a46e975b3cb66904-0.
INFO 03-02 00:42:08 [logger.py:42] Received request cmpl-6de798b9b0674d0eb54f4a66e2e9d2b6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:42:08 [async_llm.py:261] Added request cmpl-6de798b9b0674d0eb54f4a66e2e9d2b6-0.
INFO 03-02 00:42:09 [logger.py:42] Received request cmpl-b8560a8f706c407bb8367c3fd0474b4f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:42:09 [async_llm.py:261] Added request cmpl-b8560a8f706c407bb8367c3fd0474b4f-0.
INFO 03-02 00:42:10 [logger.py:42] Received request cmpl-002d50d6953b44a8b9e403183546993e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:42:10 [async_llm.py:261] Added request cmpl-002d50d6953b44a8b9e403183546993e-0.
INFO 03-02 00:42:12 [logger.py:42] Received request cmpl-fc89c70c0d7e4fbf9af7f1096e9a3c11-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:42:12 [async_llm.py:261] Added request cmpl-fc89c70c0d7e4fbf9af7f1096e9a3c11-0.
INFO 03-02 00:42:13 [logger.py:42] Received request cmpl-93ecaf119011467c93c61277d5094a6f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:42:13 [async_llm.py:261] Added request cmpl-93ecaf119011467c93c61277d5094a6f-0.
INFO 03-02 00:42:14 [logger.py:42] Received request cmpl-ea886e54698342ecbe53397a8e89f4e0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:42:14 [async_llm.py:261] Added request cmpl-ea886e54698342ecbe53397a8e89f4e0-0.
INFO 03-02 00:42:15 [logger.py:42] Received request cmpl-4254813b640d4020a7096c0af6c07603-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:42:15 [async_llm.py:261] Added request cmpl-4254813b640d4020a7096c0af6c07603-0.
INFO 03-02 00:42:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:42:16 [logger.py:42] Received request cmpl-3c0af007f79e4447a838e14c2ce9e7c9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:42:16 [async_llm.py:261] Added request cmpl-3c0af007f79e4447a838e14c2ce9e7c9-0.
[… 9 further request cycles (00:42:17–00:42:26) omitted: each repeats the same three lines — "Received request cmpl-…", POST /v1/completions 200 OK, "Added request cmpl-…" — with identical SamplingParams (temperature=0.0, max_tokens=5) and the same prompt 'write a quick sort algorithm.'; only the request IDs and timestamps differ …]
INFO 03-02 00:42:26 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[… 9 further identical request cycles (00:42:27–00:42:35) omitted; only request IDs and timestamps differ …]
INFO 03-02 00:42:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[… 9 further identical request cycles (00:42:36–00:42:45) omitted; only request IDs and timestamps differ …]
INFO 03-02 00:42:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[… 9 further identical request cycles (00:42:46–00:42:55) omitted; only request IDs and timestamps differ …]
INFO 03-02 00:42:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[… identical request cycles continue at roughly one per second through 00:43:00, where the log ends mid-cycle …]
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:43:00 [async_llm.py:261] Added request cmpl-9a7dff4d5c4a4153a72c5b700f8ddc01-0.
INFO 03-02 00:43:01 [logger.py:42] Received request cmpl-526e650e84d84db199c6f15903b2c196-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:43:01 [async_llm.py:261] Added request cmpl-526e650e84d84db199c6f15903b2c196-0.
INFO 03-02 00:43:02 [logger.py:42] Received request cmpl-8f04f4ebb6ce49898f015b468c84ccff-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:43:02 [async_llm.py:261] Added request cmpl-8f04f4ebb6ce49898f015b468c84ccff-0.
INFO 03-02 00:43:04 [logger.py:42] Received request cmpl-9dbfc82f3b184cc3b01bba8e4790275e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:43:04 [async_llm.py:261] Added request cmpl-9dbfc82f3b184cc3b01bba8e4790275e-0.
INFO 03-02 00:43:05 [logger.py:42] Received request cmpl-e7275cde891f43e38e6bcf6245e15e3b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:43:05 [async_llm.py:261] Added request cmpl-e7275cde891f43e38e6bcf6245e15e3b-0.
INFO 03-02 00:43:06 [logger.py:42] Received request cmpl-b929ea27a2a446c7bdbd91d914c7dcae-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:43:06 [async_llm.py:261] Added request cmpl-b929ea27a2a446c7bdbd91d914c7dcae-0.
INFO 03-02 00:43:06 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:43:07 [logger.py:42] Received request cmpl-529d0c0c51a6480f8ef1ec9e3b312940-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:43:07 [async_llm.py:261] Added request cmpl-529d0c0c51a6480f8ef1ec9e3b312940-0.
INFO 03-02 00:43:08 [logger.py:42] Received request cmpl-5dd45618284241ff96a12f8fddd996ce-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:43:08 [async_llm.py:261] Added request cmpl-5dd45618284241ff96a12f8fddd996ce-0.
INFO 03-02 00:43:09 [logger.py:42] Received request cmpl-ad0418be34fa4aeb8e44d5503f780388-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:43:09 [async_llm.py:261] Added request cmpl-ad0418be34fa4aeb8e44d5503f780388-0.
INFO 03-02 00:43:10 [logger.py:42] Received request cmpl-35af535e3c714cc8bf241fe886fd6b3f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:43:10 [async_llm.py:261] Added request cmpl-35af535e3c714cc8bf241fe886fd6b3f-0.
INFO 03-02 00:43:11 [logger.py:42] Received request cmpl-9c42932c295b4009b108e1b35f0b4fd6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:43:11 [async_llm.py:261] Added request cmpl-9c42932c295b4009b108e1b35f0b4fd6-0.
INFO 03-02 00:43:12 [logger.py:42] Received request cmpl-66c747d0052e41ee9328f6c347fe599f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:43:12 [async_llm.py:261] Added request cmpl-66c747d0052e41ee9328f6c347fe599f-0.
INFO 03-02 00:43:13 [logger.py:42] Received request cmpl-867620cd16c94d9a8be71a8512a39b41-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:43:13 [async_llm.py:261] Added request cmpl-867620cd16c94d9a8be71a8512a39b41-0.
INFO 03-02 00:43:14 [logger.py:42] Received request cmpl-94843c33f8e748b1b0a5f464612af1cb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:43:14 [async_llm.py:261] Added request cmpl-94843c33f8e748b1b0a5f464612af1cb-0.
INFO 03-02 00:43:15 [logger.py:42] Received request cmpl-fb7bab21b743460896aa7fe2c41bda29-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:43:15 [async_llm.py:261] Added request cmpl-fb7bab21b743460896aa7fe2c41bda29-0.
INFO 03-02 00:43:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:43:17 [logger.py:42] Received request cmpl-a4810dadea5a47ec80e4ec2e12edae7c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:43:17 [async_llm.py:261] Added request cmpl-a4810dadea5a47ec80e4ec2e12edae7c-0.
INFO 03-02 00:43:18 [logger.py:42] Received request cmpl-50043024b59a4c3ea45fb33931a7660e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:43:18 [async_llm.py:261] Added request cmpl-50043024b59a4c3ea45fb33931a7660e-0.
INFO 03-02 00:43:19 [logger.py:42] Received request cmpl-d740fbe61bd646e8935d870c8c680f54-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:43:19 [async_llm.py:261] Added request cmpl-d740fbe61bd646e8935d870c8c680f54-0.
INFO 03-02 00:43:20 [logger.py:42] Received request cmpl-57cdf8ee9b674eec9506e87a0b17e840-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:43:20 [async_llm.py:261] Added request cmpl-57cdf8ee9b674eec9506e87a0b17e840-0.
INFO 03-02 00:43:21 [logger.py:42] Received request cmpl-28d21e5645da4556862e06200425d694-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:43:21 [async_llm.py:261] Added request cmpl-28d21e5645da4556862e06200425d694-0.
INFO 03-02 00:43:22 [logger.py:42] Received request cmpl-3c289cc8e2494009be1876408f007682-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:43:22 [async_llm.py:261] Added request cmpl-3c289cc8e2494009be1876408f007682-0.
INFO 03-02 00:43:23 [logger.py:42] Received request cmpl-296a5dfe00d7450ab446dfbe8c60e714-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:43:23 [async_llm.py:261] Added request cmpl-296a5dfe00d7450ab446dfbe8c60e714-0.
INFO 03-02 00:43:24 [logger.py:42] Received request cmpl-79d9cb26c0dd4b24be28c72680b6522d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:43:24 [async_llm.py:261] Added request cmpl-79d9cb26c0dd4b24be28c72680b6522d-0.
INFO 03-02 00:43:25 [logger.py:42] Received request cmpl-7b17965a6e11460aaecabc1efbae07c9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:43:25 [async_llm.py:261] Added request cmpl-7b17965a6e11460aaecabc1efbae07c9-0.
INFO 03-02 00:43:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:43:26 [logger.py:42] Received request cmpl-31f09eda458540ed8e033520216ae015-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:43:26 [async_llm.py:261] Added request cmpl-31f09eda458540ed8e033520216ae015-0.
INFO 03-02 00:43:27 [logger.py:42] Received request cmpl-8ec0926745754baaaa0c90020e76c409-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:43:27 [async_llm.py:261] Added request cmpl-8ec0926745754baaaa0c90020e76c409-0.
INFO 03-02 00:43:28 [logger.py:42] Received request cmpl-7aa92c707dbb42dfada8d1d5285c1d1d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:43:28 [async_llm.py:261] Added request cmpl-7aa92c707dbb42dfada8d1d5285c1d1d-0.
INFO 03-02 00:43:30 [logger.py:42] Received request cmpl-4f2d4df522934145afb062e651a191d2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:43:30 [async_llm.py:261] Added request cmpl-4f2d4df522934145afb062e651a191d2-0.
INFO 03-02 00:43:31 [logger.py:42] Received request cmpl-9fe8a84f5dc54efbb70cba300403c302-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:43:31 [async_llm.py:261] Added request cmpl-9fe8a84f5dc54efbb70cba300403c302-0.
INFO 03-02 00:43:32 [logger.py:42] Received request cmpl-2f6e40acf73c415abab2158e4a373ff1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:43:32 [async_llm.py:261] Added request cmpl-2f6e40acf73c415abab2158e4a373ff1-0.
INFO 03-02 00:43:33 [logger.py:42] Received request cmpl-49e1027ef7f64ae69b9a71034f363453-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:43:33 [async_llm.py:261] Added request cmpl-49e1027ef7f64ae69b9a71034f363453-0.
INFO 03-02 00:43:34 [logger.py:42] Received request cmpl-ab79b90d864b4e94ba3ff965020c2ef5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:43:34 [async_llm.py:261] Added request cmpl-ab79b90d864b4e94ba3ff965020c2ef5-0.
INFO 03-02 00:43:35 [logger.py:42] Received request cmpl-9aaf06846254485bac8da5e5bacae829-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:43:35 [async_llm.py:261] Added request cmpl-9aaf06846254485bac8da5e5bacae829-0.
INFO 03-02 00:43:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:43:36 [logger.py:42] Received request cmpl-340f55bb368349609aa93d67259cf65a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:43:36 [async_llm.py:261] Added request cmpl-340f55bb368349609aa93d67259cf65a-0.
INFO 03-02 00:43:37 [logger.py:42] Received request cmpl-aa9f3e0b829045119d6e821a83a8c12f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:43:37 [async_llm.py:261] Added request cmpl-aa9f3e0b829045119d6e821a83a8c12f-0.
INFO 03-02 00:43:38 [logger.py:42] Received request cmpl-8ee68358f8ba43f0bf091c13fe657390-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:43:38 [async_llm.py:261] Added request cmpl-8ee68358f8ba43f0bf091c13fe657390-0.
INFO 03-02 00:43:39 [logger.py:42] Received request cmpl-d275fa1f37104276b4bb4df5ae60a28b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:43:39 [async_llm.py:261] Added request cmpl-d275fa1f37104276b4bb4df5ae60a28b-0.
INFO 03-02 00:43:40 [logger.py:42] Received request cmpl-518e84fbed974cfb8f8d2cceba017baf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:43:40 [async_llm.py:261] Added request cmpl-518e84fbed974cfb8f8d2cceba017baf-0.
INFO 03-02 00:43:41 [logger.py:42] Received request cmpl-ae0fa22563fa4b819157acc09da3f44b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:43:41 [async_llm.py:261] Added request cmpl-ae0fa22563fa4b819157acc09da3f44b-0.
INFO 03-02 00:43:43 [logger.py:42] Received request cmpl-d5f39c63c046450ab1756115db78e7bb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:43:43 [async_llm.py:261] Added request cmpl-d5f39c63c046450ab1756115db78e7bb-0.
INFO 03-02 00:43:44 [logger.py:42] Received request cmpl-8d6544cb54a14255bc78233d9bdc5c51-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:43:44 [async_llm.py:261] Added request cmpl-8d6544cb54a14255bc78233d9bdc5c51-0.
INFO 03-02 00:43:45 [logger.py:42] Received request cmpl-f5d63018275446509ecb100f7e83981c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:43:45 [async_llm.py:261] Added request cmpl-f5d63018275446509ecb100f7e83981c-0.
INFO 03-02 00:43:46 [logger.py:42] Received request cmpl-8edf76f2d5dd4a499a3223c25f36a785-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:43:46 [async_llm.py:261] Added request cmpl-8edf76f2d5dd4a499a3223c25f36a785-0.
INFO 03-02 00:43:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:43:47 [logger.py:42] Received request cmpl-8503f05b24bd448194936f596eca32a9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:43:47 [async_llm.py:261] Added request cmpl-8503f05b24bd448194936f596eca32a9-0.
INFO 03-02 00:43:48 [logger.py:42] Received request cmpl-6a15812d64dd42fe9df17efc451edae6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:43:48 [async_llm.py:261] Added request cmpl-6a15812d64dd42fe9df17efc451edae6-0.
INFO 03-02 00:43:49 [logger.py:42] Received request cmpl-8dcf052fb36945a79434c3106a63895a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:43:49 [async_llm.py:261] Added request cmpl-8dcf052fb36945a79434c3106a63895a-0.
INFO 03-02 00:43:50 [logger.py:42] Received request cmpl-ad138ae6ca174acbae89bccd8a8b4f29-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:43:50 [async_llm.py:261] Added request cmpl-ad138ae6ca174acbae89bccd8a8b4f29-0.
INFO 03-02 00:43:51 [logger.py:42] Received request cmpl-7dfcfdc7c955463eb8a95665d5d41e9b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:43:51 [async_llm.py:261] Added request cmpl-7dfcfdc7c955463eb8a95665d5d41e9b-0.
INFO 03-02 00:43:52 [logger.py:42] Received request cmpl-b63d00d16e3245c7947f2ff47cea87b2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:43:52 [async_llm.py:261] Added request cmpl-b63d00d16e3245c7947f2ff47cea87b2-0.
INFO 03-02 00:43:53 [logger.py:42] Received request cmpl-6e68e0b47db347fe9ae9b4fe94ee2fbd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:43:53 [async_llm.py:261] Added request cmpl-6e68e0b47db347fe9ae9b4fe94ee2fbd-0.
INFO 03-02 00:43:54 [logger.py:42] Received request cmpl-fe5a7472873847c282033f6e4654ed1a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:43:54 [async_llm.py:261] Added request cmpl-fe5a7472873847c282033f6e4654ed1a-0.
INFO 03-02 00:43:56 [logger.py:42] Received request cmpl-caa1d0216364482796ca65461ff367a2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:43:56 [async_llm.py:261] Added request cmpl-caa1d0216364482796ca65461ff367a2-0.
INFO 03-02 00:43:56 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:43:57 [logger.py:42] Received request cmpl-8ee62799fad74df0b98923b5d3aae4ff-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:43:57 [async_llm.py:261] Added request cmpl-8ee62799fad74df0b98923b5d3aae4ff-0.
INFO 03-02 00:43:58 [logger.py:42] Received request cmpl-07cfeaf6cf4d471aa5e291e9d9129c67-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:43:58 [async_llm.py:261] Added request cmpl-07cfeaf6cf4d471aa5e291e9d9129c67-0.
INFO 03-02 00:43:59 [logger.py:42] Received request cmpl-134665b1ffe9493ba33f49e557030c35-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:43:59 [async_llm.py:261] Added request cmpl-134665b1ffe9493ba33f49e557030c35-0.
INFO 03-02 00:44:00 [logger.py:42] Received request cmpl-0c49e4d3bf7246ffa414e866135b08a7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:00 [async_llm.py:261] Added request cmpl-0c49e4d3bf7246ffa414e866135b08a7-0.
INFO 03-02 00:44:01 [logger.py:42] Received request cmpl-54f4baa644fe4d238d18fe4cdce8b2c3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:01 [async_llm.py:261] Added request cmpl-54f4baa644fe4d238d18fe4cdce8b2c3-0.
INFO 03-02 00:44:02 [logger.py:42] Received request cmpl-e41830b5edb64a71a8eca7cd317bfb18-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:02 [async_llm.py:261] Added request cmpl-e41830b5edb64a71a8eca7cd317bfb18-0.
INFO 03-02 00:44:03 [logger.py:42] Received request cmpl-c6a3fd6f76384179844a8ba12718565d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:03 [async_llm.py:261] Added request cmpl-c6a3fd6f76384179844a8ba12718565d-0.
INFO 03-02 00:44:04 [logger.py:42] Received request cmpl-8f5fcd58c97345a8a37ad9790342aeab-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:04 [async_llm.py:261] Added request cmpl-8f5fcd58c97345a8a37ad9790342aeab-0.
INFO 03-02 00:44:05 [logger.py:42] Received request cmpl-498f3136f03d4ae0b955e72581949401-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:05 [async_llm.py:261] Added request cmpl-498f3136f03d4ae0b955e72581949401-0.
INFO 03-02 00:44:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:44:06 [logger.py:42] Received request cmpl-fdf86c75b5944d0f9011cb61ce5d3ab1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:06 [async_llm.py:261] Added request cmpl-fdf86c75b5944d0f9011cb61ce5d3ab1-0.
INFO 03-02 00:44:07 [logger.py:42] Received request cmpl-d19fdced15924b1789d463080a4a7504-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:07 [async_llm.py:261] Added request cmpl-d19fdced15924b1789d463080a4a7504-0.
INFO 03-02 00:44:09 [logger.py:42] Received request cmpl-51887237911c4bae86b70e7e2049270c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:09 [async_llm.py:261] Added request cmpl-51887237911c4bae86b70e7e2049270c-0.
INFO 03-02 00:44:10 [logger.py:42] Received request cmpl-a1096f3b27dd4178a5f4243af98ccded-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:10 [async_llm.py:261] Added request cmpl-a1096f3b27dd4178a5f4243af98ccded-0.
INFO 03-02 00:44:11 [logger.py:42] Received request cmpl-6d35dab3be91426dba4108fdd8210e56-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:11 [async_llm.py:261] Added request cmpl-6d35dab3be91426dba4108fdd8210e56-0.
INFO 03-02 00:44:12 [logger.py:42] Received request cmpl-9a55f445b0064423b1ce6383570763b1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:12 [async_llm.py:261] Added request cmpl-9a55f445b0064423b1ce6383570763b1-0.
INFO 03-02 00:44:13 [logger.py:42] Received request cmpl-379ddc2b86f14fb9a3cb569037b50ea2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:13 [async_llm.py:261] Added request cmpl-379ddc2b86f14fb9a3cb569037b50ea2-0.
INFO 03-02 00:44:14 [logger.py:42] Received request cmpl-43739e4a0cec4cf29653154316baca85-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:14 [async_llm.py:261] Added request cmpl-43739e4a0cec4cf29653154316baca85-0.
INFO 03-02 00:44:15 [logger.py:42] Received request cmpl-388f8b01c552447280d5434f1abc4303-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:15 [async_llm.py:261] Added request cmpl-388f8b01c552447280d5434f1abc4303-0.
INFO 03-02 00:44:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:44:16 [logger.py:42] Received request cmpl-56be92ba5d8849498da7b0ce018fabc9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:16 [async_llm.py:261] Added request cmpl-56be92ba5d8849498da7b0ce018fabc9-0.
INFO 03-02 00:44:17 [logger.py:42] Received request cmpl-24e1ab71eb03458dadc30c1c81826dde-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:17 [async_llm.py:261] Added request cmpl-24e1ab71eb03458dadc30c1c81826dde-0.
INFO 03-02 00:44:18 [logger.py:42] Received request cmpl-0aaeb8ca747448a7936f7f7d5731210e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:18 [async_llm.py:261] Added request cmpl-0aaeb8ca747448a7936f7f7d5731210e-0.
INFO 03-02 00:44:19 [logger.py:42] Received request cmpl-c4a5473911fc478fb5914e9bb5cb5226-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:19 [async_llm.py:261] Added request cmpl-c4a5473911fc478fb5914e9bb5cb5226-0.
INFO 03-02 00:44:20 [logger.py:42] Received request cmpl-506c2f58da2b4038971a172d31c4820b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:20 [async_llm.py:261] Added request cmpl-506c2f58da2b4038971a172d31c4820b-0.
INFO 03-02 00:44:22 [logger.py:42] Received request cmpl-2d3babf3a52f45809fcd5ee77ebdef12-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:22 [async_llm.py:261] Added request cmpl-2d3babf3a52f45809fcd5ee77ebdef12-0.
INFO 03-02 00:44:23 [logger.py:42] Received request cmpl-a89cb1ef52dc464eb8cd4f18fccefb61-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:23 [async_llm.py:261] Added request cmpl-a89cb1ef52dc464eb8cd4f18fccefb61-0.
INFO 03-02 00:44:24 [logger.py:42] Received request cmpl-bcc7110eae444ea8bad7351db3aca3fb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:24 [async_llm.py:261] Added request cmpl-bcc7110eae444ea8bad7351db3aca3fb-0.
INFO 03-02 00:44:25 [logger.py:42] Received request cmpl-261030bdb3644cbc934c0d3ec2f1e6b0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:25 [async_llm.py:261] Added request cmpl-261030bdb3644cbc934c0d3ec2f1e6b0-0.
INFO 03-02 00:44:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:44:26 [logger.py:42] Received request cmpl-0b464a8e9cad4248a3423888aa0d447d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:26 [async_llm.py:261] Added request cmpl-0b464a8e9cad4248a3423888aa0d447d-0.
INFO 03-02 00:44:27 [logger.py:42] Received request cmpl-55285b54388e49678614a8c118e48504-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:27 [async_llm.py:261] Added request cmpl-55285b54388e49678614a8c118e48504-0.
INFO 03-02 00:44:28 [logger.py:42] Received request cmpl-250fde816956481592a2fe6932eccd53-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:28 [async_llm.py:261] Added request cmpl-250fde816956481592a2fe6932eccd53-0.
INFO 03-02 00:44:29 [logger.py:42] Received request cmpl-2804bcb693c245dc88797bc4ecfc1e4a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:29 [async_llm.py:261] Added request cmpl-2804bcb693c245dc88797bc4ecfc1e4a-0.
INFO 03-02 00:44:30 [logger.py:42] Received request cmpl-611f058e076946808954552719b896c3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:30 [async_llm.py:261] Added request cmpl-611f058e076946808954552719b896c3-0.
INFO 03-02 00:44:31 [logger.py:42] Received request cmpl-8078a509d1704bd7803b49dbc95967ff-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:31 [async_llm.py:261] Added request cmpl-8078a509d1704bd7803b49dbc95967ff-0.
INFO 03-02 00:44:32 [logger.py:42] Received request cmpl-565fc7a7fc4d473493fc9af2c9ec9ed9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:32 [async_llm.py:261] Added request cmpl-565fc7a7fc4d473493fc9af2c9ec9ed9-0.
INFO 03-02 00:44:33 [logger.py:42] Received request cmpl-9e7ab65e1346485594136d5b0ab356e6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:33 [async_llm.py:261] Added request cmpl-9e7ab65e1346485594136d5b0ab356e6-0.
INFO 03-02 00:44:35 [logger.py:42] Received request cmpl-e9ccb8b4e0c24f34856468df37af3eb3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:35 [async_llm.py:261] Added request cmpl-e9ccb8b4e0c24f34856468df37af3eb3-0.
INFO 03-02 00:44:36 [logger.py:42] Received request cmpl-2fb69208f68e472097720ce7ecd1a962-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:36 [async_llm.py:261] Added request cmpl-2fb69208f68e472097720ce7ecd1a962-0.
INFO 03-02 00:44:36 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:44:37 [logger.py:42] Received request cmpl-19798b1d044345dba884fd32ecf8cbdb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:37 [async_llm.py:261] Added request cmpl-19798b1d044345dba884fd32ecf8cbdb-0.
INFO 03-02 00:44:38 [logger.py:42] Received request cmpl-c0527a80a25b41e989fbcaec4c5b2dd9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:38 [async_llm.py:261] Added request cmpl-c0527a80a25b41e989fbcaec4c5b2dd9-0.
INFO 03-02 00:44:39 [logger.py:42] Received request cmpl-b7dd8cbdb80d401a87a3ec5b004dd646-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:39 [async_llm.py:261] Added request cmpl-b7dd8cbdb80d401a87a3ec5b004dd646-0.
INFO 03-02 00:44:40 [logger.py:42] Received request cmpl-90965175c5964c70abf64c46f2d31604-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:40 [async_llm.py:261] Added request cmpl-90965175c5964c70abf64c46f2d31604-0.
INFO 03-02 00:44:41 [logger.py:42] Received request cmpl-814422c7f078467093252b16e457fa37-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:41 [async_llm.py:261] Added request cmpl-814422c7f078467093252b16e457fa37-0.
INFO 03-02 00:44:42 [logger.py:42] Received request cmpl-b447b207fe76439d862c79ef2e72c6db-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:42 [async_llm.py:261] Added request cmpl-b447b207fe76439d862c79ef2e72c6db-0.
INFO 03-02 00:44:43 [logger.py:42] Received request cmpl-6bbfabbffe2d48c5a27dfa33ef0b5459-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:43 [async_llm.py:261] Added request cmpl-6bbfabbffe2d48c5a27dfa33ef0b5459-0.
INFO 03-02 00:44:44 [logger.py:42] Received request cmpl-f39846eae51f422da666cb32193c27b6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:44 [async_llm.py:261] Added request cmpl-f39846eae51f422da666cb32193c27b6-0.
INFO 03-02 00:44:45 [logger.py:42] Received request cmpl-6fed56de832a44d18b12d39a48dba1be-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:45 [async_llm.py:261] Added request cmpl-6fed56de832a44d18b12d39a48dba1be-0.
INFO 03-02 00:44:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:44:46 [logger.py:42] Received request cmpl-b9b64ceb7dd54ecbadad00ff735d046e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:46 [async_llm.py:261] Added request cmpl-b9b64ceb7dd54ecbadad00ff735d046e-0.
INFO 03-02 00:44:48 [logger.py:42] Received request cmpl-d8dd641b8d2a4e14a122aa6f741c6d54-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:48 [async_llm.py:261] Added request cmpl-d8dd641b8d2a4e14a122aa6f741c6d54-0.
INFO 03-02 00:44:49 [logger.py:42] Received request cmpl-8d25cec8ed6a4ba2bbdf0c286bf64b9e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:49 [async_llm.py:261] Added request cmpl-8d25cec8ed6a4ba2bbdf0c286bf64b9e-0.
INFO 03-02 00:44:50 [logger.py:42] Received request cmpl-a6f9292c274649539a43b1795287b5c8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:50 [async_llm.py:261] Added request cmpl-a6f9292c274649539a43b1795287b5c8-0.
INFO 03-02 00:44:51 [logger.py:42] Received request cmpl-8949e6ad20834a1a8dedd7329e282e37-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:51 [async_llm.py:261] Added request cmpl-8949e6ad20834a1a8dedd7329e282e37-0.
INFO 03-02 00:44:52 [logger.py:42] Received request cmpl-d0b5c2dfc9654bcc9e1f5d4e8d810a77-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:52 [async_llm.py:261] Added request cmpl-d0b5c2dfc9654bcc9e1f5d4e8d810a77-0.
INFO 03-02 00:44:53 [logger.py:42] Received request cmpl-b9fcbb303f80425592d8128ecab864c3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:53 [async_llm.py:261] Added request cmpl-b9fcbb303f80425592d8128ecab864c3-0.
INFO 03-02 00:44:54 [logger.py:42] Received request cmpl-27418e9e42c846b7bdec777403e81730-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:54 [async_llm.py:261] Added request cmpl-27418e9e42c846b7bdec777403e81730-0.
INFO 03-02 00:44:55 [logger.py:42] Received request cmpl-4edcb2aeb8d34b60a5547857da19a3d2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:55 [async_llm.py:261] Added request cmpl-4edcb2aeb8d34b60a5547857da19a3d2-0.
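Each `Received request` / `200 OK` / `Added request` triple above corresponds to one OpenAI-compatible completion call. A minimal sketch of the JSON body that would produce the logged SamplingParams — the base URL is a placeholder and the model name is taken from the funcpod table, not from these log lines:

```python
# Sketch: build the body for a POST /v1/completions call matching the logged
# params (temperature=0.0, max_tokens=5). BASE_URL is a hypothetical endpoint.
import json

BASE_URL = "http://localhost:8000"           # placeholder, not from the log
MODEL = "translategemma-27b-it-FP8-Dynamic"  # model name from the funcpod table

payload = {
    "model": MODEL,
    "prompt": "write a quick sort algorithm.",
    "max_tokens": 5,     # matches max_tokens=5 in the logged SamplingParams
    "temperature": 0.0,  # greedy decoding, as logged
    "top_p": 1.0,
    "n": 1,
}

body = json.dumps(payload)
print(body)
# To actually send it (needs the `requests` package):
#   requests.post(f"{BASE_URL}/v1/completions", data=body,
#                 headers={"Content-Type": "application/json"})
```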
INFO 03-02 00:44:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:44:56 [logger.py:42] Received request cmpl-f7a0e47aca9e47e5a97648dcf7a337d2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:56 [async_llm.py:261] Added request cmpl-f7a0e47aca9e47e5a97648dcf7a337d2-0.
INFO 03-02 00:44:57 [logger.py:42] Received request cmpl-d28b8aa4d89745d2bff0f71304e22e8e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:57 [async_llm.py:261] Added request cmpl-d28b8aa4d89745d2bff0f71304e22e8e-0.
INFO 03-02 00:44:58 [logger.py:42] Received request cmpl-63b2c327d3f44cc8868f8962506622d4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:58 [async_llm.py:261] Added request cmpl-63b2c327d3f44cc8868f8962506622d4-0.
INFO 03-02 00:45:00 [logger.py:42] Received request cmpl-79c8dac8b043430d815dedd6dcedfec9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:45:00 [async_llm.py:261] Added request cmpl-79c8dac8b043430d815dedd6dcedfec9-0.
INFO 03-02 00:45:01 [logger.py:42] Received request cmpl-dd142790cda9486c9ac820de37071e49-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:45:01 [async_llm.py:261] Added request cmpl-dd142790cda9486c9ac820de37071e49-0.
INFO 03-02 00:45:02 [logger.py:42] Received request cmpl-5368d9ce27f0418e9b974194ec6bf14e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:45:02 [async_llm.py:261] Added request cmpl-5368d9ce27f0418e9b974194ec6bf14e-0.
INFO 03-02 00:45:03 [logger.py:42] Received request cmpl-d528aa728c7e4f3f87e0f408c0838c6a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:45:03 [async_llm.py:261] Added request cmpl-d528aa728c7e4f3f87e0f408c0838c6a-0.
INFO 03-02 00:45:04 [logger.py:42] Received request cmpl-385ce7d826024410bbc15a48c5f7984a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:45:04 [async_llm.py:261] Added request cmpl-385ce7d826024410bbc15a48c5f7984a-0.
INFO 03-02 00:45:05 [logger.py:42] Received request cmpl-3bfcc36b99334d0190e9371102886413-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:45:05 [async_llm.py:261] Added request cmpl-3bfcc36b99334d0190e9371102886413-0.
INFO 03-02 00:45:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
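The periodic `Engine 000` lines summarize a metrics window. A hedged sketch for pulling those figures out of a captured log — the regex is written against the exact line shape above, and the field names are my own, not vLLM's:

```python
# Sketch: parse a vLLM engine-stats line into floats. The sample line is
# copied verbatim from the log above; field names are illustrative only.
import re

line = ("INFO 03-02 00:45:06 [loggers.py:116] Engine 000: "
        "Avg prompt throughput: 6.3 tokens/s, "
        "Avg generation throughput: 4.5 tokens/s, "
        "Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, "
        "Prefix cache hit rate: 0.0%")

STATS_RE = re.compile(
    r"Avg prompt throughput: (?P<prompt_tps>[\d.]+) tokens/s, "
    r"Avg generation throughput: (?P<gen_tps>[\d.]+) tokens/s, "
    r"Running: (?P<running>\d+) reqs, Waiting: (?P<waiting>\d+) reqs, "
    r"GPU KV cache usage: (?P<kv_pct>[\d.]+)%"
)

m = STATS_RE.search(line)
stats = {k: float(v) for k, v in m.groupdict().items()}
print(stats)
```

Run over a full log capture, this yields a time series of throughput and KV-cache usage suitable for plotting or alerting.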
INFO 03-02 00:45:06 [logger.py:42] Received request cmpl-999c1535248f40e7bfe30ce59e6b3a1f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:45:06 [async_llm.py:261] Added request cmpl-999c1535248f40e7bfe30ce59e6b3a1f-0.
INFO 03-02 00:45:07 [logger.py:42] Received request cmpl-8850d48f13934c48a24d67db8fe81e4f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:45:07 [async_llm.py:261] Added request cmpl-8850d48f13934c48a24d67db8fe81e4f-0.
INFO 03-02 00:45:08 [logger.py:42] Received request cmpl-99108dd4a96b483c9ad74b3069f7a180-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:45:08 [async_llm.py:261] Added request cmpl-99108dd4a96b483c9ad74b3069f7a180-0.
INFO 03-02 00:45:09 [logger.py:42] Received request cmpl-2f27404d4d754322abf0d5452b4ef695-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:45:09 [async_llm.py:261] Added request cmpl-2f27404d4d754322abf0d5452b4ef695-0.
INFO 03-02 00:45:10 [logger.py:42] Received request cmpl-14f53885efde4a928fd32ef86b7ea37d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:45:10 [async_llm.py:261] Added request cmpl-14f53885efde4a928fd32ef86b7ea37d-0.
INFO 03-02 00:45:11 [logger.py:42] Received request cmpl-6659a22094404c06916e394215df4eee-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:45:11 [async_llm.py:261] Added request cmpl-6659a22094404c06916e394215df4eee-0.
INFO 03-02 00:45:13 [logger.py:42] Received request cmpl-19919c9547324aee83d24550e9f5975b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:45:13 [async_llm.py:261] Added request cmpl-19919c9547324aee83d24550e9f5975b-0.
INFO 03-02 00:45:14 [logger.py:42] Received request cmpl-6008aa05fcc34b2d883c5b6858a704c9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:45:14 [async_llm.py:261] Added request cmpl-6008aa05fcc34b2d883c5b6858a704c9-0.
INFO 03-02 00:45:15 [logger.py:42] Received request cmpl-0a6e33fc1ba14d968d4504bc5501fcfb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:45:15 [async_llm.py:261] Added request cmpl-0a6e33fc1ba14d968d4504bc5501fcfb-0.
INFO 03-02 00:45:16 [logger.py:42] Received request cmpl-600687fcd2184f6f8bee2c7946a7cbd7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:45:16 [async_llm.py:261] Added request cmpl-600687fcd2184f6f8bee2c7946a7cbd7-0.
INFO 03-02 00:45:16 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.7 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 1.3%, Prefix cache hit rate: 0.0%
INFO 03-02 00:45:17 [logger.py:42] Received request cmpl-0464cdfd044c48f9a847de07f4313518-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:45:17 [async_llm.py:261] Added request cmpl-0464cdfd044c48f9a847de07f4313518-0.
INFO 03-02 00:45:18 [logger.py:42] Received request cmpl-31cfa95a5235441abcdc95d9d6ffc79e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:45:18 [async_llm.py:261] Added request cmpl-31cfa95a5235441abcdc95d9d6ffc79e-0.
INFO 03-02 00:45:19 [logger.py:42] Received request cmpl-a3d112fe1fb747d9b5f4f12c62c8f78a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:45:19 [async_llm.py:261] Added request cmpl-a3d112fe1fb747d9b5f4f12c62c8f78a-0.
INFO 03-02 00:45:20 [logger.py:42] Received request cmpl-d8cb2c34c26749d5abe2ef414f192f65-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:45:20 [async_llm.py:261] Added request cmpl-d8cb2c34c26749d5abe2ef414f192f65-0.
INFO 03-02 00:45:21 [logger.py:42] Received request cmpl-a10bdd9c3c1944a2b30dff4a8aef3377-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:45:21 [async_llm.py:261] Added request cmpl-a10bdd9c3c1944a2b30dff4a8aef3377-0.
INFO 03-02 00:45:22 [logger.py:42] Received request cmpl-3d512dd3e22a461f9ab3b032a9eae2d0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:45:22 [async_llm.py:261] Added request cmpl-3d512dd3e22a461f9ab3b032a9eae2d0-0.
INFO 03-02 00:45:23 [logger.py:42] Received request cmpl-7cc92abe38284489806bc5fb55e18a4c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:45:23 [async_llm.py:261] Added request cmpl-7cc92abe38284489806bc5fb55e18a4c-0.
INFO 03-02 00:45:24 [logger.py:42] Received request cmpl-b6587a129c824767a983075e3b3b3a02-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:45:24 [async_llm.py:261] Added request cmpl-b6587a129c824767a983075e3b3b3a02-0.
INFO 03-02 00:45:26 [logger.py:42] Received request cmpl-7a8498ed76c149a581b03b6c5df99e03-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:45:26 [async_llm.py:261] Added request cmpl-7a8498ed76c149a581b03b6c5df99e03-0.
INFO 03-02 00:45:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.8 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:45:27 [logger.py:42] Received request cmpl-f782dd0a586d49f18b9bc719fd39dba2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:45:27 [async_llm.py:261] Added request cmpl-f782dd0a586d49f18b9bc719fd39dba2-0.
INFO 03-02 00:45:28 [logger.py:42] Received request cmpl-de225ce8ecf142eba4b5010ac3742e8a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:45:28 [async_llm.py:261] Added request cmpl-de225ce8ecf142eba4b5010ac3742e8a-0.
INFO 03-02 00:45:29 [logger.py:42] Received request cmpl-374fa1b860d247498ae266baa5317385-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:45:29 [async_llm.py:261] Added request cmpl-374fa1b860d247498ae266baa5317385-0.
INFO 03-02 00:45:30 [logger.py:42] Received request cmpl-7b3cf01c333349f9baf3e1257db9b453-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:45:30 [async_llm.py:261] Added request cmpl-7b3cf01c333349f9baf3e1257db9b453-0.
INFO 03-02 00:45:31 [logger.py:42] Received request cmpl-e193d766a1754a8fb0944f7353035a57-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:45:31 [async_llm.py:261] Added request cmpl-e193d766a1754a8fb0944f7353035a57-0.
INFO 03-02 00:45:32 [logger.py:42] Received request cmpl-517f7061003b4229a8cedabb3f091653-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:45:32 [async_llm.py:261] Added request cmpl-517f7061003b4229a8cedabb3f091653-0.
INFO 03-02 00:45:33 [logger.py:42] Received request cmpl-1ec62f95fcbc41e4b3afb3844efcab82-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:45:33 [async_llm.py:261] Added request cmpl-1ec62f95fcbc41e4b3afb3844efcab82-0.
INFO 03-02 00:45:34 [logger.py:42] Received request cmpl-aaa2a2670544417ca5baa003c34b8410-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:45:34 [async_llm.py:261] Added request cmpl-aaa2a2670544417ca5baa003c34b8410-0.
INFO 03-02 00:45:35 [logger.py:42] Received request cmpl-4795b930b5fb40d699913e1c3fe4c2a4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:45:35 [async_llm.py:261] Added request cmpl-4795b930b5fb40d699913e1c3fe4c2a4-0.
INFO 03-02 00:45:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:45:36 [logger.py:42] Received request cmpl-ddab9967b14e4918ae802a625fbdec09-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:45:36 [async_llm.py:261] Added request cmpl-ddab9967b14e4918ae802a625fbdec09-0.
INFO 03-02 00:45:37 [logger.py:42] Received request cmpl-e45d34f340e64c09b08c7889df67a632-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:45:37 [async_llm.py:261] Added request cmpl-e45d34f340e64c09b08c7889df67a632-0.
INFO 03-02 00:45:39 [logger.py:42] Received request cmpl-b2df05487384485c86271a2cae4aed10-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:45:39 [async_llm.py:261] Added request cmpl-b2df05487384485c86271a2cae4aed10-0.
INFO 03-02 00:45:40 [logger.py:42] Received request cmpl-eefca9b632024dfdb46cdcc74a90952e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:45:40 [async_llm.py:261] Added request cmpl-eefca9b632024dfdb46cdcc74a90952e-0.
INFO 03-02 00:45:41 [logger.py:42] Received request cmpl-9cfbbf9214ca45019b2e78be095b898d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:45:41 [async_llm.py:261] Added request cmpl-9cfbbf9214ca45019b2e78be095b898d-0.
INFO 03-02 00:45:42 [logger.py:42] Received request cmpl-67bffbb33bfb448c9e43cdb05cebb03b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:45:42 [async_llm.py:261] Added request cmpl-67bffbb33bfb448c9e43cdb05cebb03b-0.
INFO 03-02 00:45:43 [logger.py:42] Received request cmpl-b1dc40c29c1142fb8ee393325395c9d8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:45:43 [async_llm.py:261] Added request cmpl-b1dc40c29c1142fb8ee393325395c9d8-0.
INFO 03-02 00:45:44 [logger.py:42] Received request cmpl-5bcf23813bbd49208ea894c063ed4a12-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:45:44 [async_llm.py:261] Added request cmpl-5bcf23813bbd49208ea894c063ed4a12-0.
INFO 03-02 00:45:45 [logger.py:42] Received request cmpl-98a1ab0ce4c44125a5cb4e05fc285a61-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:45:45 [async_llm.py:261] Added request cmpl-98a1ab0ce4c44125a5cb4e05fc285a61-0.
INFO 03-02 00:45:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:45:46 [logger.py:42] Received request cmpl-43b3895b014a497b9d80e56f8667fc74-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:45:46 [async_llm.py:261] Added request cmpl-43b3895b014a497b9d80e56f8667fc74-0.
INFO 03-02 00:45:47 [logger.py:42] Received request cmpl-7631e1b6ba09432c8b2de0b44e8df6a9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:45:47 [async_llm.py:261] Added request cmpl-7631e1b6ba09432c8b2de0b44e8df6a9-0.
INFO 03-02 00:45:48 [logger.py:42] Received request cmpl-6f24da25fb97426ab79f11730400cf4f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:45:48 [async_llm.py:261] Added request cmpl-6f24da25fb97426ab79f11730400cf4f-0.
INFO 03-02 00:45:49 [logger.py:42] Received request cmpl-6f90e52120d34c1dbff8ca853e1b5f00-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:45:49 [async_llm.py:261] Added request cmpl-6f90e52120d34c1dbff8ca853e1b5f00-0.
INFO 03-02 00:45:50 [logger.py:42] Received request cmpl-cd23868a898b463da9d405ce55e458d7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:45:50 [async_llm.py:261] Added request cmpl-cd23868a898b463da9d405ce55e458d7-0.
INFO 03-02 00:45:52 [logger.py:42] Received request cmpl-ca3c68fbe5364aeaa99dbe708e963a7c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:45:52 [async_llm.py:261] Added request cmpl-ca3c68fbe5364aeaa99dbe708e963a7c-0.
INFO 03-02 00:45:53 [logger.py:42] Received request cmpl-1a653b07b4aa4a26aee189a93dcb9aee-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:45:53 [async_llm.py:261] Added request cmpl-1a653b07b4aa4a26aee189a93dcb9aee-0.
INFO 03-02 00:45:54 [logger.py:42] Received request cmpl-52d2bbd909ec4c5ca7384882fe30011c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:45:54 [async_llm.py:261] Added request cmpl-52d2bbd909ec4c5ca7384882fe30011c-0.
INFO 03-02 00:45:55 [logger.py:42] Received request cmpl-97c0232627614237ba2720c6fc1a6c26-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:45:55 [async_llm.py:261] Added request cmpl-97c0232627614237ba2720c6fc1a6c26-0.
INFO 03-02 00:45:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:45:56 [logger.py:42] Received request cmpl-ab6e921b4c2e4a22b8a87af8bb5bd04d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:45:56 [async_llm.py:261] Added request cmpl-ab6e921b4c2e4a22b8a87af8bb5bd04d-0.
INFO 03-02 00:45:57 [logger.py:42] Received request cmpl-736201ffa8f54f528eef382ce8637b65-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:45:57 [async_llm.py:261] Added request cmpl-736201ffa8f54f528eef382ce8637b65-0.
INFO 03-02 00:45:58 [logger.py:42] Received request cmpl-f0b1f5230f0249c9b6fd325da74d854c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:45:58 [async_llm.py:261] Added request cmpl-f0b1f5230f0249c9b6fd325da74d854c-0.
INFO 03-02 00:45:59 [logger.py:42] Received request cmpl-4d4ea2b4688844b39e36fc5eb68d202a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:45:59 [async_llm.py:261] Added request cmpl-4d4ea2b4688844b39e36fc5eb68d202a-0.
INFO 03-02 00:46:00 [logger.py:42] Received request cmpl-2232cb1110da409fa1dd3a95730acc83-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:46:00 [async_llm.py:261] Added request cmpl-2232cb1110da409fa1dd3a95730acc83-0.
INFO 03-02 00:46:01 [logger.py:42] Received request cmpl-fa689678ff424984962c675a45ea2168-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:46:01 [async_llm.py:261] Added request cmpl-fa689678ff424984962c675a45ea2168-0.
INFO 03-02 00:46:02 [logger.py:42] Received request cmpl-a2115ca191284613a7a79ae726b5121d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:46:02 [async_llm.py:261] Added request cmpl-a2115ca191284613a7a79ae726b5121d-0.
INFO 03-02 00:46:03 [logger.py:42] Received request cmpl-37c5e99b8642468695d7d59fcf4d29af-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:46:03 [async_llm.py:261] Added request cmpl-37c5e99b8642468695d7d59fcf4d29af-0.
INFO 03-02 00:46:05 [logger.py:42] Received request cmpl-7e9c503de58b4057869ed6e846b4c519-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:46:05 [async_llm.py:261] Added request cmpl-7e9c503de58b4057869ed6e846b4c519-0.
INFO 03-02 00:46:06 [logger.py:42] Received request cmpl-18b9a59cf8b84b719f6111bfec00327f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:46:06 [async_llm.py:261] Added request cmpl-18b9a59cf8b84b719f6111bfec00327f-0.
INFO 03-02 00:46:06 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:46:07 [logger.py:42] Received request cmpl-1418933c24ff465184245458b15853f1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:46:07 [async_llm.py:261] Added request cmpl-1418933c24ff465184245458b15853f1-0.
INFO 03-02 00:46:08 [logger.py:42] Received request cmpl-feeff5396ab94c489da02a826f217c4f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:46:08 [async_llm.py:261] Added request cmpl-feeff5396ab94c489da02a826f217c4f-0.
INFO 03-02 00:46:09 [logger.py:42] Received request cmpl-c54cd98458984ae396a571eed1d989ce-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:46:09 [async_llm.py:261] Added request cmpl-c54cd98458984ae396a571eed1d989ce-0.
INFO 03-02 00:46:10 [logger.py:42] Received request cmpl-110ff75a96b748038a2e8b5e97027ad6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:46:10 [async_llm.py:261] Added request cmpl-110ff75a96b748038a2e8b5e97027ad6-0.
INFO 03-02 00:46:11 [logger.py:42] Received request cmpl-6a081b40a45a49298ef9ed7c0fcfefa0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:46:11 [async_llm.py:261] Added request cmpl-6a081b40a45a49298ef9ed7c0fcfefa0-0.
INFO 03-02 00:46:12 [logger.py:42] Received request cmpl-4251e0d217e14d3084d03f0a46572a6c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:46:12 [async_llm.py:261] Added request cmpl-4251e0d217e14d3084d03f0a46572a6c-0.
INFO 03-02 00:46:13 [logger.py:42] Received request cmpl-101efa1a5c204030943f175abcba6223-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:46:13 [async_llm.py:261] Added request cmpl-101efa1a5c204030943f175abcba6223-0.
INFO 03-02 00:46:14 [logger.py:42] Received request cmpl-839fa30678694b24b334f295010f48f3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:46:14 [async_llm.py:261] Added request cmpl-839fa30678694b24b334f295010f48f3-0.
INFO 03-02 00:46:15 [logger.py:42] Received request cmpl-0e642224226a4515b2a0d42d56f10dcd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:46:15 [async_llm.py:261] Added request cmpl-0e642224226a4515b2a0d42d56f10dcd-0.
INFO 03-02 00:46:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:46:16 [logger.py:42] Received request cmpl-29ebd028d3b743abadce859219e35d81-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:46:16 [async_llm.py:261] Added request cmpl-29ebd028d3b743abadce859219e35d81-0.
INFO 03-02 00:46:18 [logger.py:42] Received request cmpl-1de46a485fcf42a298aef2661194074e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:46:18 [async_llm.py:261] Added request cmpl-1de46a485fcf42a298aef2661194074e-0.
INFO 03-02 00:46:19 [logger.py:42] Received request cmpl-9439bba07129459b81179999588fc5fd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:46:19 [async_llm.py:261] Added request cmpl-9439bba07129459b81179999588fc5fd-0.
INFO 03-02 00:46:20 [logger.py:42] Received request cmpl-2c4a2147a3494aeb84052ccb919a7aa0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:46:20 [async_llm.py:261] Added request cmpl-2c4a2147a3494aeb84052ccb919a7aa0-0.
INFO 03-02 00:46:21 [logger.py:42] Received request cmpl-e9780d0faf7545de976d5496f65da358-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:46:21 [async_llm.py:261] Added request cmpl-e9780d0faf7545de976d5496f65da358-0.
INFO 03-02 00:46:22 [logger.py:42] Received request cmpl-3f580df7ff534ae583b6d4e5b6f9f084-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:46:22 [async_llm.py:261] Added request cmpl-3f580df7ff534ae583b6d4e5b6f9f084-0.
INFO 03-02 00:46:23 [logger.py:42] Received request cmpl-958dd2b232484c8f9affc5a1a5468bca-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:46:23 [async_llm.py:261] Added request cmpl-958dd2b232484c8f9affc5a1a5468bca-0.
INFO 03-02 00:46:24 [logger.py:42] Received request cmpl-f0de0f6ffdf64399bfc748b6960d6d62-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:46:24 [async_llm.py:261] Added request cmpl-f0de0f6ffdf64399bfc748b6960d6d62-0.
INFO 03-02 00:46:25 [logger.py:42] Received request cmpl-76fd6d9685d84d1b9b46b80e7fc30dd0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:46:25 [async_llm.py:261] Added request cmpl-76fd6d9685d84d1b9b46b80e7fc30dd0-0.
INFO 03-02 00:46:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:46:26 [logger.py:42] Received request cmpl-349a522a55134493b5078dc3c1491529-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:46:26 [async_llm.py:261] Added request cmpl-349a522a55134493b5078dc3c1491529-0.
INFO 03-02 00:46:27 [logger.py:42] Received request cmpl-141910bc0ec94b95b5e63982705e4765-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:46:27 [async_llm.py:261] Added request cmpl-141910bc0ec94b95b5e63982705e4765-0.
INFO 03-02 00:46:28 [logger.py:42] Received request cmpl-0afe87b710474be7bc2f6f267172c6f0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:46:28 [async_llm.py:261] Added request cmpl-0afe87b710474be7bc2f6f267172c6f0-0.
INFO 03-02 00:46:29 [logger.py:42] Received request cmpl-b4296405c0764e1ba07e1ba87672c430-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:46:29 [async_llm.py:261] Added request cmpl-b4296405c0764e1ba07e1ba87672c430-0.
INFO 03-02 00:46:31 [logger.py:42] Received request cmpl-89e5119e9b3b4499bb380b0e3359d004-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:46:31 [async_llm.py:261] Added request cmpl-89e5119e9b3b4499bb380b0e3359d004-0.
INFO 03-02 00:46:32 [logger.py:42] Received request cmpl-5b292c4e99be4c45a473e0b17eaddf98-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:46:32 [async_llm.py:261] Added request cmpl-5b292c4e99be4c45a473e0b17eaddf98-0.
INFO 03-02 00:46:33 [logger.py:42] Received request cmpl-08216aa77de84e428f815114b2a68bf4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:46:33 [async_llm.py:261] Added request cmpl-08216aa77de84e428f815114b2a68bf4-0.
INFO 03-02 00:46:34 [logger.py:42] Received request cmpl-148ee843f6964669b3c4d7063bc35c46-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:46:34 [async_llm.py:261] Added request cmpl-148ee843f6964669b3c4d7063bc35c46-0.
INFO 03-02 00:46:35 [logger.py:42] Received request cmpl-e7e910d9d1784025b832ad563866ec8b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:46:35 [async_llm.py:261] Added request cmpl-e7e910d9d1784025b832ad563866ec8b-0.
INFO 03-02 00:46:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:46:36 [logger.py:42] Received request cmpl-2dbf9fea7aa34083bca2fe7ac8d726ca-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:46:36 [async_llm.py:261] Added request cmpl-2dbf9fea7aa34083bca2fe7ac8d726ca-0.
INFO 03-02 00:46:37 [logger.py:42] Received request cmpl-61894f76ee6b49108e4bcadc02939ec8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:46:37 [async_llm.py:261] Added request cmpl-61894f76ee6b49108e4bcadc02939ec8-0.
INFO 03-02 00:46:38 [logger.py:42] Received request cmpl-0ddcac5f661e45bdb44d88108bf9136c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:46:38 [async_llm.py:261] Added request cmpl-0ddcac5f661e45bdb44d88108bf9136c-0.
INFO 03-02 00:46:39 [logger.py:42] Received request cmpl-82187f00141e460fb388403a7f4814a8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:46:39 [async_llm.py:261] Added request cmpl-82187f00141e460fb388403a7f4814a8-0.
INFO 03-02 00:46:40 [logger.py:42] Received request cmpl-ad93af57edc845458dddb555d34f34f7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:46:40 [async_llm.py:261] Added request cmpl-ad93af57edc845458dddb555d34f34f7-0.
INFO 03-02 00:46:41 [logger.py:42] Received request cmpl-341c4a66e2bd43488e658b8eff09c5cb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:46:41 [async_llm.py:261] Added request cmpl-341c4a66e2bd43488e658b8eff09c5cb-0.
INFO 03-02 00:46:42 [logger.py:42] Received request cmpl-cf7a4ab4194f42e99f227232595b78b5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:46:42 [async_llm.py:261] Added request cmpl-cf7a4ab4194f42e99f227232595b78b5-0.
INFO 03-02 00:46:44 [logger.py:42] Received request cmpl-7e38135821b54065ac03590ad54a07e7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:46:44 [async_llm.py:261] Added request cmpl-7e38135821b54065ac03590ad54a07e7-0.
INFO 03-02 00:46:45 [logger.py:42] Received request cmpl-a0832d4e6e2040efb0bc6540a192ec22-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:46:45 [async_llm.py:261] Added request cmpl-a0832d4e6e2040efb0bc6540a192ec22-0.
INFO 03-02 00:46:46 [logger.py:42] Received request cmpl-2f54784f71cb45518b0f19f74555e3e2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:46:46 [async_llm.py:261] Added request cmpl-2f54784f71cb45518b0f19f74555e3e2-0.
INFO 03-02 00:46:46 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.8 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 1.3%, Prefix cache hit rate: 0.0%
INFO 03-02 00:46:47 [logger.py:42] Received request cmpl-a227361866ed4f57aadc4c3948d21d23-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:46:47 [async_llm.py:261] Added request cmpl-a227361866ed4f57aadc4c3948d21d23-0.
INFO 03-02 00:46:48 [logger.py:42] Received request cmpl-c83d75bd00814bbdb1007c7cf629d90d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:46:48 [async_llm.py:261] Added request cmpl-c83d75bd00814bbdb1007c7cf629d90d-0.
INFO 03-02 00:46:49 [logger.py:42] Received request cmpl-40d734150a7645ba8cc87509f09cdfbe-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:46:49 [async_llm.py:261] Added request cmpl-40d734150a7645ba8cc87509f09cdfbe-0.
[... 6 further request/response entries omitted (00:46:50 – 00:46:56): identical prompt and SamplingParams (temperature=0.0, max_tokens=5), roughly one request per second, all returning 200 OK ...]
INFO 03-02 00:46:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.7 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[... 9 further request/response entries omitted (00:46:57 – 00:47:05): same prompt and parameters, all 200 OK ...]
INFO 03-02 00:47:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[... 9 further request/response entries omitted (00:47:06 – 00:47:15): same prompt and parameters, all 200 OK ...]
INFO 03-02 00:47:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[... 9 further request/response entries omitted (00:47:16 – 00:47:25): same prompt and parameters, all 200 OK ...]
INFO 03-02 00:47:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:47:26 [logger.py:42] Received request cmpl-a21a73c68975477dbb4ac61c56db62a5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:47:26 [async_llm.py:261] Added request cmpl-a21a73c68975477dbb4ac61c56db62a5-0.
INFO 03-02 00:47:27 [logger.py:42] Received request cmpl-52c59bb47ec04c53bdc89df64c02769c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:47:27 [async_llm.py:261] Added request cmpl-52c59bb47ec04c53bdc89df64c02769c-0.
INFO 03-02 00:47:28 [logger.py:42] Received request cmpl-9968153a82404dc3b234b645a97ba79b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:47:28 [async_llm.py:261] Added request cmpl-9968153a82404dc3b234b645a97ba79b-0.
INFO 03-02 00:47:36 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
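The `Engine 000` lines are vLLM's periodic metrics summaries. A sketch of extracting them for monitoring, keyed to the exact format observed above (field names and ordering are an assumption; the format is not guaranteed stable across vLLM versions):

```python
import re

# One periodic metrics line, as captured in the log above.
LINE = ("Engine 000: Avg prompt throughput: 7.0 tokens/s, "
        "Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, "
        "Waiting: 0 reqs, GPU KV cache usage: 0.7%, "
        "Prefix cache hit rate: 0.0%")

# Regex matching the observed layout of loggers.py output.
PATTERN = re.compile(
    r"Avg prompt throughput: (?P<prompt_tps>[\d.]+) tokens/s, "
    r"Avg generation throughput: (?P<gen_tps>[\d.]+) tokens/s, "
    r"Running: (?P<running>\d+) reqs, Waiting: (?P<waiting>\d+) reqs, "
    r"GPU KV cache usage: (?P<kv_pct>[\d.]+)%, "
    r"Prefix cache hit rate: (?P<prefix_hit_pct>[\d.]+)%"
)

def parse_metrics(line: str) -> dict:
    """Extract throughput and cache metrics from one stats line."""
    m = PATTERN.search(line)
    if m is None:
        raise ValueError("unrecognized metrics line")
    return {k: float(v) for k, v in m.groupdict().items()}

metrics = parse_metrics(LINE)
```

As a sanity check, the numbers are self-consistent: the 7-token prompt arriving roughly once per second yields the reported ~7 tokens/s average prompt throughput, and 5 generated tokens per request (`max_tokens=5`) yields ~5 tokens/s generation throughput.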
INFO 03-02 00:47:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:47:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:48:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:12 [async_llm.py:261] Added request cmpl-81bde86e5f46474985defb098bdb2961-0.
INFO 03-02 00:48:14 [logger.py:42] Received request cmpl-84b3eb3b22c240e2be71b4938894cbfc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:14 [async_llm.py:261] Added request cmpl-84b3eb3b22c240e2be71b4938894cbfc-0.
INFO 03-02 00:48:15 [logger.py:42] Received request cmpl-1e9a9a14ac064b0daf5d405174341cbb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:15 [async_llm.py:261] Added request cmpl-1e9a9a14ac064b0daf5d405174341cbb-0.
INFO 03-02 00:48:16 [logger.py:42] Received request cmpl-e0bcea74bb7c4e1a991e12e6eccc0caa-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:16 [async_llm.py:261] Added request cmpl-e0bcea74bb7c4e1a991e12e6eccc0caa-0.
INFO 03-02 00:48:16 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
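Each three-line group above (Received request → 200 OK → Added request) corresponds to one OpenAI-style completion call against the funcpod. A minimal sketch of a request body that would produce entries with these SamplingParams (the host is a placeholder; field names follow the OpenAI-compatible `/v1/completions` convention):

```python
import json

# Placeholder endpoint -- the real host/port is provisioned per funcpod by InferX.
URL = "http://<funcpod-host>/v1/completions"

# Body matching the SamplingParams logged above:
# temperature=0.0 (greedy decoding) and max_tokens=5.
payload = {
    "model": "translategemma-27b-it-FP8-Dynamic",
    "prompt": "write a quick sort algorithm.",
    "max_tokens": 5,
    "temperature": 0.0,
}

body = json.dumps(payload)
# POST `body` to URL with header Content-Type: application/json, e.g.
# requests.post(URL, data=body, headers={"Content-Type": "application/json"})
```

Unset fields (top_p, penalties, stop, seed, …) fall back to the server defaults shown in the log, so only the values that differ from those defaults need to be sent.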
INFO 03-02 00:48:17 [logger.py:42] Received request cmpl-493c78bdb03744bd9a904249896f3c34-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:17 [async_llm.py:261] Added request cmpl-493c78bdb03744bd9a904249896f3c34-0.
INFO 03-02 00:48:18 [logger.py:42] Received request cmpl-f10a901e9d334019b7fbef7c52e62081-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:18 [async_llm.py:261] Added request cmpl-f10a901e9d334019b7fbef7c52e62081-0.
INFO 03-02 00:48:19 [logger.py:42] Received request cmpl-ec7f0c0814d74526a2781300ce43e131-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:19 [async_llm.py:261] Added request cmpl-ec7f0c0814d74526a2781300ce43e131-0.
INFO 03-02 00:48:20 [logger.py:42] Received request cmpl-dc9da5bcb95c473dae310784383ece4e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:20 [async_llm.py:261] Added request cmpl-dc9da5bcb95c473dae310784383ece4e-0.
INFO 03-02 00:48:21 [logger.py:42] Received request cmpl-e37ec793f45d420c891e9ab52ef9d2e1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:21 [async_llm.py:261] Added request cmpl-e37ec793f45d420c891e9ab52ef9d2e1-0.
INFO 03-02 00:48:22 [logger.py:42] Received request cmpl-8c0ca1798e5f4bee802adb3f7220072f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:22 [async_llm.py:261] Added request cmpl-8c0ca1798e5f4bee802adb3f7220072f-0.
INFO 03-02 00:48:23 [logger.py:42] Received request cmpl-08809cc113854a60a4bac9990a909282-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:23 [async_llm.py:261] Added request cmpl-08809cc113854a60a4bac9990a909282-0.
INFO 03-02 00:48:24 [logger.py:42] Received request cmpl-1cc5f51f2415492dba65c12ba90f5426-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:24 [async_llm.py:261] Added request cmpl-1cc5f51f2415492dba65c12ba90f5426-0.
INFO 03-02 00:48:25 [logger.py:42] Received request cmpl-4360cd925be54a8a8fcb3f5e9b096cfc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:25 [async_llm.py:261] Added request cmpl-4360cd925be54a8a8fcb3f5e9b096cfc-0.
INFO 03-02 00:48:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
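The periodic `loggers.py` summary can be sanity-checked against the request stream itself: in the ten-second window ending 00:48:26, nine requests were admitted (timestamps :17 through :25), each with a 7-token prompt and max_tokens=5. A back-of-the-envelope check, assuming every request generated all 5 tokens:

```python
# Window 00:48:16 -> 00:48:26: nine requests logged at :17 .. :25 inclusive.
requests_in_window = 9
window_seconds = 10
prompt_tokens = 7      # len(prompt_token_ids) in every entry above
generated_tokens = 5   # max_tokens=5, assuming each request ran to the cap

avg_prompt_tps = requests_in_window * prompt_tokens / window_seconds
avg_generation_tps = requests_in_window * generated_tokens / window_seconds

print(avg_prompt_tps)      # 6.3 -- matches "Avg prompt throughput: 6.3 tokens/s"
print(avg_generation_tps)  # 4.5 -- matches "Avg generation throughput: 4.5 tokens/s"
```

The low GPU KV cache usage (0.7%) and empty running/waiting queues are consistent with this pattern: each tiny request completes well before the next one arrives.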
INFO 03-02 00:48:27 [logger.py:42] Received request cmpl-866557d4f4fe43a3aaa8c5ab9475b473-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:27 [async_llm.py:261] Added request cmpl-866557d4f4fe43a3aaa8c5ab9475b473-0.
INFO 03-02 00:48:28 [logger.py:42] Received request cmpl-2527ca5454834761880edba93f0a68c2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:28 [async_llm.py:261] Added request cmpl-2527ca5454834761880edba93f0a68c2-0.
INFO 03-02 00:48:29 [logger.py:42] Received request cmpl-c249b2c578e3417b8d0de9de215879c0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:29 [async_llm.py:261] Added request cmpl-c249b2c578e3417b8d0de9de215879c0-0.
INFO 03-02 00:48:30 [logger.py:42] Received request cmpl-cce28f93f5854b69bc704fdf5b32e5e9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:30 [async_llm.py:261] Added request cmpl-cce28f93f5854b69bc704fdf5b32e5e9-0.
INFO 03-02 00:48:31 [logger.py:42] Received request cmpl-a085b82cf3164178b403ec40a74fa519-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:31 [async_llm.py:261] Added request cmpl-a085b82cf3164178b403ec40a74fa519-0.
INFO 03-02 00:48:32 [logger.py:42] Received request cmpl-b3179a778bc24466b34f5bb3d223fdb9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:32 [async_llm.py:261] Added request cmpl-b3179a778bc24466b34f5bb3d223fdb9-0.
INFO 03-02 00:48:33 [logger.py:42] Received request cmpl-2763d8790daa443396b6ba9b5546064c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:33 [async_llm.py:261] Added request cmpl-2763d8790daa443396b6ba9b5546064c-0.
INFO 03-02 00:48:34 [logger.py:42] Received request cmpl-1c36de41afaf4c80b3b0b7503307b587-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:34 [async_llm.py:261] Added request cmpl-1c36de41afaf4c80b3b0b7503307b587-0.
INFO 03-02 00:48:35 [logger.py:42] Received request cmpl-2e019d94fa2146ea9054dae05dd4df3b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:35 [async_llm.py:261] Added request cmpl-2e019d94fa2146ea9054dae05dd4df3b-0.
INFO 03-02 00:48:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:48:36 [logger.py:42] Received request cmpl-e9e52b29f86a45088ad4dca0d152b65e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:36 [async_llm.py:261] Added request cmpl-e9e52b29f86a45088ad4dca0d152b65e-0.
INFO 03-02 00:48:37 [logger.py:42] Received request cmpl-5a10788077be482aa7f1e620c52c3b5d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:37 [async_llm.py:261] Added request cmpl-5a10788077be482aa7f1e620c52c3b5d-0.
INFO 03-02 00:48:38 [logger.py:42] Received request cmpl-dffeb07faee147ae8862b153f9a48dde-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:38 [async_llm.py:261] Added request cmpl-dffeb07faee147ae8862b153f9a48dde-0.
INFO 03-02 00:48:40 [logger.py:42] Received request cmpl-4d3db7d8cdb24c78a88a1cf60ce2044e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:40 [async_llm.py:261] Added request cmpl-4d3db7d8cdb24c78a88a1cf60ce2044e-0.
INFO 03-02 00:48:41 [logger.py:42] Received request cmpl-b218b43a4b84409787ba3f0d69b02ab4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:41 [async_llm.py:261] Added request cmpl-b218b43a4b84409787ba3f0d69b02ab4-0.
INFO 03-02 00:48:42 [logger.py:42] Received request cmpl-6db61e5506d84378a9bb2bf742359228-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:42 [async_llm.py:261] Added request cmpl-6db61e5506d84378a9bb2bf742359228-0.
INFO 03-02 00:48:43 [logger.py:42] Received request cmpl-12544504bcc240e4895a5ac8e1533ecb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:43 [async_llm.py:261] Added request cmpl-12544504bcc240e4895a5ac8e1533ecb-0.
INFO 03-02 00:48:44 [logger.py:42] Received request cmpl-2db2daa6fce74696a0a1295ec6b2079f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:44 [async_llm.py:261] Added request cmpl-2db2daa6fce74696a0a1295ec6b2079f-0.
INFO 03-02 00:48:45 [logger.py:42] Received request cmpl-87f1cca396d34495a9e3617beb34bfd5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:45 [async_llm.py:261] Added request cmpl-87f1cca396d34495a9e3617beb34bfd5-0.
INFO 03-02 00:48:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:48:46 [logger.py:42] Received request cmpl-db7a2f3c6f3448bc80e648f178cc4b78-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:46 [async_llm.py:261] Added request cmpl-db7a2f3c6f3448bc80e648f178cc4b78-0.
INFO 03-02 00:48:47 [logger.py:42] Received request cmpl-9b8ebc42a608473d84434b1b71a68bf4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:47 [async_llm.py:261] Added request cmpl-9b8ebc42a608473d84434b1b71a68bf4-0.
INFO 03-02 00:48:48 [logger.py:42] Received request cmpl-c7c7565edc8b499c9ea4408d54bd51c7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:48 [async_llm.py:261] Added request cmpl-c7c7565edc8b499c9ea4408d54bd51c7-0.
INFO 03-02 00:48:49 [logger.py:42] Received request cmpl-ee24afcd26ff4f64ba4d8df4485e3e0f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:49 [async_llm.py:261] Added request cmpl-ee24afcd26ff4f64ba4d8df4485e3e0f-0.
INFO 03-02 00:48:50 [logger.py:42] Received request cmpl-3af24f6f5405473abada0b284e9480cd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:50 [async_llm.py:261] Added request cmpl-3af24f6f5405473abada0b284e9480cd-0.
INFO 03-02 00:48:51 [logger.py:42] Received request cmpl-aa3816e5934640cd893ca30aa568c1d8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:51 [async_llm.py:261] Added request cmpl-aa3816e5934640cd893ca30aa568c1d8-0.
INFO 03-02 00:48:53 [logger.py:42] Received request cmpl-0160f3ab91d6422b87213e855d728fea-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:53 [async_llm.py:261] Added request cmpl-0160f3ab91d6422b87213e855d728fea-0.
INFO 03-02 00:48:54 [logger.py:42] Received request cmpl-75fb8df4cddf4455a7cea09f65084f32-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:54 [async_llm.py:261] Added request cmpl-75fb8df4cddf4455a7cea09f65084f32-0.
INFO 03-02 00:48:55 [logger.py:42] Received request cmpl-a907cf6ce385475da66caaf1c2d617e8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:55 [async_llm.py:261] Added request cmpl-a907cf6ce385475da66caaf1c2d617e8-0.
INFO 03-02 00:48:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:48:56 [logger.py:42] Received request cmpl-ff92d632dbd943228360c3898abc0150-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:56 [async_llm.py:261] Added request cmpl-ff92d632dbd943228360c3898abc0150-0.
INFO 03-02 00:48:57 [logger.py:42] Received request cmpl-83e311aa34ce435a91ddf3c818c7ebcb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:57 [async_llm.py:261] Added request cmpl-83e311aa34ce435a91ddf3c818c7ebcb-0.
INFO 03-02 00:48:58 [logger.py:42] Received request cmpl-f2b65a4b92d04182b7225fa0cf707090-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:58 [async_llm.py:261] Added request cmpl-f2b65a4b92d04182b7225fa0cf707090-0.
INFO 03-02 00:48:59 [logger.py:42] Received request cmpl-7b6647331f4347bd970c93ffe2d5ebe1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:59 [async_llm.py:261] Added request cmpl-7b6647331f4347bd970c93ffe2d5ebe1-0.
INFO 03-02 00:49:00 [logger.py:42] Received request cmpl-9027c8256deb4b48a419373f5ec455ef-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:00 [async_llm.py:261] Added request cmpl-9027c8256deb4b48a419373f5ec455ef-0.
INFO 03-02 00:49:01 [logger.py:42] Received request cmpl-a62c2a052faa4ec5b901625d70974b4f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:01 [async_llm.py:261] Added request cmpl-a62c2a052faa4ec5b901625d70974b4f-0.
INFO 03-02 00:49:02 [logger.py:42] Received request cmpl-4aecfd1b50644ad793a2f802f579787d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:02 [async_llm.py:261] Added request cmpl-4aecfd1b50644ad793a2f802f579787d-0.
INFO 03-02 00:49:03 [logger.py:42] Received request cmpl-5a1a3fe2c86847d9afa05909a14c0801-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:03 [async_llm.py:261] Added request cmpl-5a1a3fe2c86847d9afa05909a14c0801-0.
INFO 03-02 00:49:04 [logger.py:42] Received request cmpl-a2a3b71bc5af431a89b5d935dad2f7b8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:04 [async_llm.py:261] Added request cmpl-a2a3b71bc5af431a89b5d935dad2f7b8-0.
INFO 03-02 00:49:06 [logger.py:42] Received request cmpl-03ed6b8bcd874f83aa63817519bf6a54-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:06 [async_llm.py:261] Added request cmpl-03ed6b8bcd874f83aa63817519bf6a54-0.
INFO 03-02 00:49:06 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:49:07 [logger.py:42] Received request cmpl-ae09f33d51514f1386e16e30d8e3e552-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:07 [async_llm.py:261] Added request cmpl-ae09f33d51514f1386e16e30d8e3e552-0.
INFO 03-02 00:49:08 [logger.py:42] Received request cmpl-35fb15317b08491bad46aff214539309-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:08 [async_llm.py:261] Added request cmpl-35fb15317b08491bad46aff214539309-0.
INFO 03-02 00:49:09 [logger.py:42] Received request cmpl-ef926e938a24436fbdfaecf030212660-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:09 [async_llm.py:261] Added request cmpl-ef926e938a24436fbdfaecf030212660-0.
INFO 03-02 00:49:10 [logger.py:42] Received request cmpl-2f8c96ba07584b9395e04dc9294a82da-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:10 [async_llm.py:261] Added request cmpl-2f8c96ba07584b9395e04dc9294a82da-0.
INFO 03-02 00:49:11 [logger.py:42] Received request cmpl-24a7719a459d4958bf71c01a4d1bd6f6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:11 [async_llm.py:261] Added request cmpl-24a7719a459d4958bf71c01a4d1bd6f6-0.
INFO 03-02 00:49:12 [logger.py:42] Received request cmpl-2c7010311bb64c9ab2727fda8ad217ba-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:12 [async_llm.py:261] Added request cmpl-2c7010311bb64c9ab2727fda8ad217ba-0.
INFO 03-02 00:49:13 [logger.py:42] Received request cmpl-89d3e1fb19cd48c08b392a64f0bf0cf7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:13 [async_llm.py:261] Added request cmpl-89d3e1fb19cd48c08b392a64f0bf0cf7-0.
INFO 03-02 00:49:14 [logger.py:42] Received request cmpl-32b39a5397374bf2be1712158b40a189-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:14 [async_llm.py:261] Added request cmpl-32b39a5397374bf2be1712158b40a189-0.
INFO 03-02 00:49:15 [logger.py:42] Received request cmpl-c57ad322e69249a58e4edd008fd6142e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:15 [async_llm.py:261] Added request cmpl-c57ad322e69249a58e4edd008fd6142e-0.
INFO 03-02 00:49:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:49:16 [logger.py:42] Received request cmpl-e9ad62ff8c784988b61d18740a2ac786-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:16 [async_llm.py:261] Added request cmpl-e9ad62ff8c784988b61d18740a2ac786-0.
INFO 03-02 00:49:18 [logger.py:42] Received request cmpl-8e90c6db897843b4b0d960f9311f82fa-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:18 [async_llm.py:261] Added request cmpl-8e90c6db897843b4b0d960f9311f82fa-0.
INFO 03-02 00:49:19 [logger.py:42] Received request cmpl-f787322bd4364d0696701dd52d9b7bd2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:19 [async_llm.py:261] Added request cmpl-f787322bd4364d0696701dd52d9b7bd2-0.
INFO 03-02 00:49:20 [logger.py:42] Received request cmpl-f05603b9acd7416e861ac27113a66d16-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:20 [async_llm.py:261] Added request cmpl-f05603b9acd7416e861ac27113a66d16-0.
INFO 03-02 00:49:21 [logger.py:42] Received request cmpl-f739fa1cfacd4b918aa0867d801194f8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:21 [async_llm.py:261] Added request cmpl-f739fa1cfacd4b918aa0867d801194f8-0.
INFO 03-02 00:49:22 [logger.py:42] Received request cmpl-a2a8a4ae577548d7bdc3792615b2be43-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:22 [async_llm.py:261] Added request cmpl-a2a8a4ae577548d7bdc3792615b2be43-0.
INFO 03-02 00:49:23 [logger.py:42] Received request cmpl-a9ce733d0e124db89f72e02c15171a3c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:23 [async_llm.py:261] Added request cmpl-a9ce733d0e124db89f72e02c15171a3c-0.
INFO 03-02 00:49:24 [logger.py:42] Received request cmpl-c070b25ecb1a4a19ba23151975dce393-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:24 [async_llm.py:261] Added request cmpl-c070b25ecb1a4a19ba23151975dce393-0.
INFO 03-02 00:49:25 [logger.py:42] Received request cmpl-152a8650decf4d1e928b73113336a45a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:25 [async_llm.py:261] Added request cmpl-152a8650decf4d1e928b73113336a45a-0.
INFO 03-02 00:49:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
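The periodic `loggers.py:116` lines above summarize engine throughput, queue depth, and KV-cache usage. A minimal sketch for extracting those metrics from a log line (the field layout is taken directly from the lines shown here; a different vLLM version may format them differently):

```python
import re

# Matches vLLM periodic stats lines of the form seen in this log, e.g.:
# "Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput:
#  4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, ..."
STATS_RE = re.compile(
    r"Avg prompt throughput: (?P<prompt_tps>[\d.]+) tokens/s, "
    r"Avg generation throughput: (?P<gen_tps>[\d.]+) tokens/s, "
    r"Running: (?P<running>\d+) reqs, Waiting: (?P<waiting>\d+) reqs, "
    r"GPU KV cache usage: (?P<kv_pct>[\d.]+)%"
)


def parse_stats(line: str):
    """Return a dict of throughput metrics, or None for non-stats lines."""
    m = STATS_RE.search(line)
    if m is None:
        return None
    d = m.groupdict()
    return {
        "prompt_tps": float(d["prompt_tps"]),
        "gen_tps": float(d["gen_tps"]),
        "running": int(d["running"]),
        "waiting": int(d["waiting"]),
        "kv_cache_pct": float(d["kv_pct"]),
    }
```

Applied across a log file, this makes it easy to chart request pressure over time; in this trace the engine stays idle between ticks (`Running: 0`, `Waiting: 0`), consistent with the one-request-per-second arrival pattern.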
INFO 03-02 00:49:26 [logger.py:42] Received request cmpl-154df764368a4884aa31daa00e10e033-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:26 [async_llm.py:261] Added request cmpl-154df764368a4884aa31daa00e10e033-0.
INFO 03-02 00:49:27 [logger.py:42] Received request cmpl-9a7067ce33d042d98dc5a3c243a8f4f1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:27 [async_llm.py:261] Added request cmpl-9a7067ce33d042d98dc5a3c243a8f4f1-0.
INFO 03-02 00:49:28 [logger.py:42] Received request cmpl-3d87e46612ad479a8e57498895750f0e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:28 [async_llm.py:261] Added request cmpl-3d87e46612ad479a8e57498895750f0e-0.
INFO 03-02 00:49:29 [logger.py:42] Received request cmpl-70b1e1780c764335a25e7f963f6c1233-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:29 [async_llm.py:261] Added request cmpl-70b1e1780c764335a25e7f963f6c1233-0.
INFO 03-02 00:49:31 [logger.py:42] Received request cmpl-d627b7a4c5e543219822dfc6670d8297-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:31 [async_llm.py:261] Added request cmpl-d627b7a4c5e543219822dfc6670d8297-0.
INFO 03-02 00:49:32 [logger.py:42] Received request cmpl-78c6edb659dd412ca38894427171e013-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:32 [async_llm.py:261] Added request cmpl-78c6edb659dd412ca38894427171e013-0.
INFO 03-02 00:49:33 [logger.py:42] Received request cmpl-3f8958eca376407b9b0bb0217a176668-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:33 [async_llm.py:261] Added request cmpl-3f8958eca376407b9b0bb0217a176668-0.
INFO 03-02 00:49:34 [logger.py:42] Received request cmpl-15c6acd8aeb14bf0b9055a5f4103222b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:34 [async_llm.py:261] Added request cmpl-15c6acd8aeb14bf0b9055a5f4103222b-0.
INFO 03-02 00:49:35 [logger.py:42] Received request cmpl-a5b4f62580fb409a8718154c480b557d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:35 [async_llm.py:261] Added request cmpl-a5b4f62580fb409a8718154c480b557d-0.
INFO 03-02 00:49:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:49:36 [logger.py:42] Received request cmpl-1415c3c90b2f4b08969aae6890b9271e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:36 [async_llm.py:261] Added request cmpl-1415c3c90b2f4b08969aae6890b9271e-0.
INFO 03-02 00:49:37 [logger.py:42] Received request cmpl-74ec41f2ee4945e0bf9f94a410204a9e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:37 [async_llm.py:261] Added request cmpl-74ec41f2ee4945e0bf9f94a410204a9e-0.
INFO 03-02 00:49:38 [logger.py:42] Received request cmpl-d0dcb3df87164b5699b4f03905fe1cca-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:38 [async_llm.py:261] Added request cmpl-d0dcb3df87164b5699b4f03905fe1cca-0.
INFO 03-02 00:49:39 [logger.py:42] Received request cmpl-408b788efc664ff49343184c102b724d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:39 [async_llm.py:261] Added request cmpl-408b788efc664ff49343184c102b724d-0.
INFO 03-02 00:49:40 [logger.py:42] Received request cmpl-9164df9fe5424825b8503238c76f0022-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:40 [async_llm.py:261] Added request cmpl-9164df9fe5424825b8503238c76f0022-0.
INFO 03-02 00:49:41 [logger.py:42] Received request cmpl-0cc0558eef7a43e39dbaf08129a0a2e7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:41 [async_llm.py:261] Added request cmpl-0cc0558eef7a43e39dbaf08129a0a2e7-0.
INFO 03-02 00:49:42 [logger.py:42] Received request cmpl-f684952222964ae2af3f51bc30719437-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:42 [async_llm.py:261] Added request cmpl-f684952222964ae2af3f51bc30719437-0.
INFO 03-02 00:49:44 [logger.py:42] Received request cmpl-0a36af89ab7e4a00bae382724f834071-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:44 [async_llm.py:261] Added request cmpl-0a36af89ab7e4a00bae382724f834071-0.
INFO 03-02 00:49:45 [logger.py:42] Received request cmpl-ff5be52fd1ea4f0ea4194dced3271e90-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:45 [async_llm.py:261] Added request cmpl-ff5be52fd1ea4f0ea4194dced3271e90-0.
INFO 03-02 00:49:46 [logger.py:42] Received request cmpl-ef613bb149c24cb3b821995438286137-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:46 [async_llm.py:261] Added request cmpl-ef613bb149c24cb3b821995438286137-0.
INFO 03-02 00:49:46 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:49:47 [logger.py:42] Received request cmpl-49ee3dfd24ca4272be996407ccc3447e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:47 [async_llm.py:261] Added request cmpl-49ee3dfd24ca4272be996407ccc3447e-0.
INFO 03-02 00:49:48 [logger.py:42] Received request cmpl-c44d67a000ae46278c28e85bd4ccdac1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:48 [async_llm.py:261] Added request cmpl-c44d67a000ae46278c28e85bd4ccdac1-0.
INFO 03-02 00:49:49 [logger.py:42] Received request cmpl-611d8824f6ea4b1686c45e769a8b8489-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:49 [async_llm.py:261] Added request cmpl-611d8824f6ea4b1686c45e769a8b8489-0.
INFO 03-02 00:49:50 [logger.py:42] Received request cmpl-c0d3c3a9cf1e43c59aba073cad111c4b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:50 [async_llm.py:261] Added request cmpl-c0d3c3a9cf1e43c59aba073cad111c4b-0.
INFO 03-02 00:49:51 [logger.py:42] Received request cmpl-23396d7a8a544edbb83d7fd193f95402-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:51 [async_llm.py:261] Added request cmpl-23396d7a8a544edbb83d7fd193f95402-0.
INFO 03-02 00:49:52 [logger.py:42] Received request cmpl-6c9b0a5cca384c34a82bdb9d90a2ab46-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:52 [async_llm.py:261] Added request cmpl-6c9b0a5cca384c34a82bdb9d90a2ab46-0.
INFO 03-02 00:49:53 [logger.py:42] Received request cmpl-de7a06272a78436daee81408021ae754-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:53 [async_llm.py:261] Added request cmpl-de7a06272a78436daee81408021ae754-0.
INFO 03-02 00:49:54 [logger.py:42] Received request cmpl-f562ca1c395f4c33babbccc939030d8b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:54 [async_llm.py:261] Added request cmpl-f562ca1c395f4c33babbccc939030d8b-0.
INFO 03-02 00:49:55 [logger.py:42] Received request cmpl-e1d347424782461fb42faf69cb9c1819-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:55 [async_llm.py:261] Added request cmpl-e1d347424782461fb42faf69cb9c1819-0.
INFO 03-02 00:49:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:49:57 [logger.py:42] Received request cmpl-b349c6d4506a484ea428b3554e93e30f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:57 [async_llm.py:261] Added request cmpl-b349c6d4506a484ea428b3554e93e30f-0.
INFO 03-02 00:49:58 [logger.py:42] Received request cmpl-f4d5f02075664b5a8bba51c7264eb3c7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:58 [async_llm.py:261] Added request cmpl-f4d5f02075664b5a8bba51c7264eb3c7-0.
INFO 03-02 00:49:59 [logger.py:42] Received request cmpl-df403c8479524b24b77b6b8e790d2d34-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:59 [async_llm.py:261] Added request cmpl-df403c8479524b24b77b6b8e790d2d34-0.
INFO 03-02 00:50:00 [logger.py:42] Received request cmpl-3fc91d46e8d6461986b01959f48294ae-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:50:00 [async_llm.py:261] Added request cmpl-3fc91d46e8d6461986b01959f48294ae-0.
INFO 03-02 00:50:01 [logger.py:42] Received request cmpl-b94cf7cea8704647942dd0a839c2c5b7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:50:01 [async_llm.py:261] Added request cmpl-b94cf7cea8704647942dd0a839c2c5b7-0.
INFO 03-02 00:50:02 [logger.py:42] Received request cmpl-074b6ed6c2ef4a489014722ac361cb93-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:50:02 [async_llm.py:261] Added request cmpl-074b6ed6c2ef4a489014722ac361cb93-0.
INFO 03-02 00:50:03 [logger.py:42] Received request cmpl-5b04a874d7194f06815857276776301e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:50:03 [async_llm.py:261] Added request cmpl-5b04a874d7194f06815857276776301e-0.
INFO 03-02 00:50:04 [logger.py:42] Received request cmpl-09d51d4c2d2845e5a1e98fa37f383677-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:50:04 [async_llm.py:261] Added request cmpl-09d51d4c2d2845e5a1e98fa37f383677-0.
INFO 03-02 00:50:05 [logger.py:42] Received request cmpl-bf85e81e533b4d3885e35b7a0f90bf03-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:50:05 [async_llm.py:261] Added request cmpl-bf85e81e533b4d3885e35b7a0f90bf03-0.
INFO 03-02 00:50:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[... 38 further identical request cycles elided, 00:50:06–00:50:46: same prompt 'write a quick sort algorithm.', same SamplingParams (temperature=0.0, max_tokens=5, n=1), each acknowledged with "POST /v1/completions HTTP/1.1" 200 OK and an "Added request" entry; periodic engine stats held steady at ~6.3–7.0 tokens/s avg prompt throughput, ~4.5–5.0 tokens/s avg generation throughput, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage 0.7%, prefix cache hit rate 0.0% ...]
INFO 03-02 00:50:47 [logger.py:42] Received request cmpl-7a0a0012685e4785ab4ccd89bc225867-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:50:47 [async_llm.py:261] Added request cmpl-7a0a0012685e4785ab4ccd89bc225867-0.
INFO 03-02 00:50:49 [logger.py:42] Received request cmpl-a16ca9d01d1e47e5a7b1fa108b89b3ad-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:50:49 [async_llm.py:261] Added request cmpl-a16ca9d01d1e47e5a7b1fa108b89b3ad-0.
INFO 03-02 00:50:50 [logger.py:42] Received request cmpl-e2c0c1bb5853414d825b229ce25fa325-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:50:50 [async_llm.py:261] Added request cmpl-e2c0c1bb5853414d825b229ce25fa325-0.
INFO 03-02 00:50:51 [logger.py:42] Received request cmpl-0c4c38dc42694274ab80ad1c12231963-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:50:51 [async_llm.py:261] Added request cmpl-0c4c38dc42694274ab80ad1c12231963-0.
INFO 03-02 00:50:52 [logger.py:42] Received request cmpl-3e44be2800c54c86a81b7d414ad8b9f9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:50:52 [async_llm.py:261] Added request cmpl-3e44be2800c54c86a81b7d414ad8b9f9-0.
INFO 03-02 00:50:53 [logger.py:42] Received request cmpl-e7372b4a5c884a8b8b3c46f079b23c34-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:50:53 [async_llm.py:261] Added request cmpl-e7372b4a5c884a8b8b3c46f079b23c34-0.
INFO 03-02 00:50:54 [logger.py:42] Received request cmpl-bc897d338d024d4d87f5f0deddbd2aa4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:50:54 [async_llm.py:261] Added request cmpl-bc897d338d024d4d87f5f0deddbd2aa4-0.
INFO 03-02 00:50:55 [logger.py:42] Received request cmpl-e33154f24d1749d6993b284426d8bf31-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:50:55 [async_llm.py:261] Added request cmpl-e33154f24d1749d6993b284426d8bf31-0.
INFO 03-02 00:50:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:50:56 [logger.py:42] Received request cmpl-339e3a3aa3e34b95932df9e1171c3416-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:50:56 [async_llm.py:261] Added request cmpl-339e3a3aa3e34b95932df9e1171c3416-0.
INFO 03-02 00:50:57 [logger.py:42] Received request cmpl-ad6a1771eb1f46b5b6af27753967a5db-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:50:57 [async_llm.py:261] Added request cmpl-ad6a1771eb1f46b5b6af27753967a5db-0.
INFO 03-02 00:50:58 [logger.py:42] Received request cmpl-6f5b3e1661a24feaa7b121dbde77b333-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:50:58 [async_llm.py:261] Added request cmpl-6f5b3e1661a24feaa7b121dbde77b333-0.
INFO 03-02 00:50:59 [logger.py:42] Received request cmpl-5f49edb679724a5297f4e8fe4514543a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:50:59 [async_llm.py:261] Added request cmpl-5f49edb679724a5297f4e8fe4514543a-0.
INFO 03-02 00:51:00 [logger.py:42] Received request cmpl-aa2fbce044dc4d84a2371079c5464ac2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:00 [async_llm.py:261] Added request cmpl-aa2fbce044dc4d84a2371079c5464ac2-0.
INFO 03-02 00:51:02 [logger.py:42] Received request cmpl-4db5bbaf41c843d0b6f3fe55e23c568a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:02 [async_llm.py:261] Added request cmpl-4db5bbaf41c843d0b6f3fe55e23c568a-0.
INFO 03-02 00:51:03 [logger.py:42] Received request cmpl-db815692bd00443cb1aa138fb1262224-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:03 [async_llm.py:261] Added request cmpl-db815692bd00443cb1aa138fb1262224-0.
INFO 03-02 00:51:04 [logger.py:42] Received request cmpl-2ac6085d7fa14479981d1b42fcd100f7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:04 [async_llm.py:261] Added request cmpl-2ac6085d7fa14479981d1b42fcd100f7-0.
INFO 03-02 00:51:05 [logger.py:42] Received request cmpl-4a589514afc24f16b609d55a1af05526-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:05 [async_llm.py:261] Added request cmpl-4a589514afc24f16b609d55a1af05526-0.
INFO 03-02 00:51:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:51:06 [logger.py:42] Received request cmpl-3295c364970d4198942e9cee926822cd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:06 [async_llm.py:261] Added request cmpl-3295c364970d4198942e9cee926822cd-0.
INFO 03-02 00:51:07 [logger.py:42] Received request cmpl-1f0bb5b584a3466c9de79a18514d77ea-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:07 [async_llm.py:261] Added request cmpl-1f0bb5b584a3466c9de79a18514d77ea-0.
INFO 03-02 00:51:08 [logger.py:42] Received request cmpl-329585825ba84b28aae835cfb3d37c42-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:08 [async_llm.py:261] Added request cmpl-329585825ba84b28aae835cfb3d37c42-0.
INFO 03-02 00:51:09 [logger.py:42] Received request cmpl-fd150146c8334ed48f5742502d2213fc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:09 [async_llm.py:261] Added request cmpl-fd150146c8334ed48f5742502d2213fc-0.
INFO 03-02 00:51:10 [logger.py:42] Received request cmpl-a7139fa15d344e1eab88fca02162df4c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:10 [async_llm.py:261] Added request cmpl-a7139fa15d344e1eab88fca02162df4c-0.
INFO 03-02 00:51:11 [logger.py:42] Received request cmpl-2608c81a02d04db3a2344179e822e3a3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:11 [async_llm.py:261] Added request cmpl-2608c81a02d04db3a2344179e822e3a3-0.
INFO 03-02 00:51:12 [logger.py:42] Received request cmpl-fa7d14cf8a7743a98389f7d4eb739468-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:12 [async_llm.py:261] Added request cmpl-fa7d14cf8a7743a98389f7d4eb739468-0.
INFO 03-02 00:51:13 [logger.py:42] Received request cmpl-af6c9c23620e47fe8855d405c81fed93-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:14 [async_llm.py:261] Added request cmpl-af6c9c23620e47fe8855d405c81fed93-0.
INFO 03-02 00:51:15 [logger.py:42] Received request cmpl-f0da4df41576428dbe8af06043ba821a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:15 [async_llm.py:261] Added request cmpl-f0da4df41576428dbe8af06043ba821a-0.
INFO 03-02 00:51:16 [logger.py:42] Received request cmpl-57dab3d7f0034e32a3adeab32581ce0f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:16 [async_llm.py:261] Added request cmpl-57dab3d7f0034e32a3adeab32581ce0f-0.
INFO 03-02 00:51:16 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:51:17 [logger.py:42] Received request cmpl-9f48a4baa6264c839a37fa985a1fc570-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:17 [async_llm.py:261] Added request cmpl-9f48a4baa6264c839a37fa985a1fc570-0.
INFO 03-02 00:51:18 [logger.py:42] Received request cmpl-0e1a13b708f14ee4909151801b1826ff-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:18 [async_llm.py:261] Added request cmpl-0e1a13b708f14ee4909151801b1826ff-0.
INFO 03-02 00:51:19 [logger.py:42] Received request cmpl-3adfea068735462188a90c5d14f460d4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:19 [async_llm.py:261] Added request cmpl-3adfea068735462188a90c5d14f460d4-0.
INFO 03-02 00:51:20 [logger.py:42] Received request cmpl-fec368c77ff3416e97843cfdac0a789b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:20 [async_llm.py:261] Added request cmpl-fec368c77ff3416e97843cfdac0a789b-0.
INFO 03-02 00:51:21 [logger.py:42] Received request cmpl-b9d4a4fe5acf413ba2ada493025920ce-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:21 [async_llm.py:261] Added request cmpl-b9d4a4fe5acf413ba2ada493025920ce-0.
INFO 03-02 00:51:22 [logger.py:42] Received request cmpl-8b7e4cf79d394af894246952e99c369d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:22 [async_llm.py:261] Added request cmpl-8b7e4cf79d394af894246952e99c369d-0.
INFO 03-02 00:51:23 [logger.py:42] Received request cmpl-df46468e816b40bbbccd7e8372ff1a7a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:23 [async_llm.py:261] Added request cmpl-df46468e816b40bbbccd7e8372ff1a7a-0.
INFO 03-02 00:51:24 [logger.py:42] Received request cmpl-5951c0ef506645f5a19649a711532a48-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:24 [async_llm.py:261] Added request cmpl-5951c0ef506645f5a19649a711532a48-0.
INFO 03-02 00:51:25 [logger.py:42] Received request cmpl-e72f6a44c3904d7fbfe73a5f372756b5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:25 [async_llm.py:261] Added request cmpl-e72f6a44c3904d7fbfe73a5f372756b5-0.
INFO 03-02 00:51:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:51:27 [logger.py:42] Received request cmpl-bb24268959b54f448de975babf43f002-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:27 [async_llm.py:261] Added request cmpl-bb24268959b54f448de975babf43f002-0.
INFO 03-02 00:51:28 [logger.py:42] Received request cmpl-7768aff7796f415dad3b75d8f7d8b923-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:28 [async_llm.py:261] Added request cmpl-7768aff7796f415dad3b75d8f7d8b923-0.
INFO 03-02 00:51:29 [logger.py:42] Received request cmpl-8f6bd779fea54c5890571bd68de21b9a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:29 [async_llm.py:261] Added request cmpl-8f6bd779fea54c5890571bd68de21b9a-0.
INFO 03-02 00:51:30 [logger.py:42] Received request cmpl-68600636b670467b81ff15343af42cc9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:30 [async_llm.py:261] Added request cmpl-68600636b670467b81ff15343af42cc9-0.
INFO 03-02 00:51:31 [logger.py:42] Received request cmpl-937ee799f41e482e8e183a1e36108db6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:31 [async_llm.py:261] Added request cmpl-937ee799f41e482e8e183a1e36108db6-0.
INFO 03-02 00:51:32 [logger.py:42] Received request cmpl-2b01e4b6718e412fa52072aa88e26575-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:32 [async_llm.py:261] Added request cmpl-2b01e4b6718e412fa52072aa88e26575-0.
INFO 03-02 00:51:33 [logger.py:42] Received request cmpl-e358d6aa1e864c1599d12bfed6cc1afe-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:33 [async_llm.py:261] Added request cmpl-e358d6aa1e864c1599d12bfed6cc1afe-0.
INFO 03-02 00:51:34 [logger.py:42] Received request cmpl-5b9aac537874474a82f353ebe1cbe367-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:34 [async_llm.py:261] Added request cmpl-5b9aac537874474a82f353ebe1cbe367-0.
INFO 03-02 00:51:35 [logger.py:42] Received request cmpl-41b5143edcd1466bb6daf945c6dd1291-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:35 [async_llm.py:261] Added request cmpl-41b5143edcd1466bb6daf945c6dd1291-0.
INFO 03-02 00:51:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:51:36 [logger.py:42] Received request cmpl-2e002d8b3270424c93102867767ea0e8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:36 [async_llm.py:261] Added request cmpl-2e002d8b3270424c93102867767ea0e8-0.
INFO 03-02 00:51:37 [logger.py:42] Received request cmpl-0619da77dc644a63bca8e2278e2e14a1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:37 [async_llm.py:261] Added request cmpl-0619da77dc644a63bca8e2278e2e14a1-0.
INFO 03-02 00:51:38 [logger.py:42] Received request cmpl-dc39d74936f745129dc26dbf450faad7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:38 [async_llm.py:261] Added request cmpl-dc39d74936f745129dc26dbf450faad7-0.
INFO 03-02 00:51:40 [logger.py:42] Received request cmpl-98f63c4c1e9d472bbdafd1182cfa6883-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:40 [async_llm.py:261] Added request cmpl-98f63c4c1e9d472bbdafd1182cfa6883-0.
INFO 03-02 00:51:41 [logger.py:42] Received request cmpl-660f9c2bf8f04d8692a2a7dec0e56f2f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:41 [async_llm.py:261] Added request cmpl-660f9c2bf8f04d8692a2a7dec0e56f2f-0.
INFO 03-02 00:51:42 [logger.py:42] Received request cmpl-440ea76bf25f45e28d00c28e9f7da1c9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:42 [async_llm.py:261] Added request cmpl-440ea76bf25f45e28d00c28e9f7da1c9-0.
INFO 03-02 00:51:43 [logger.py:42] Received request cmpl-9fb0c887efc34e23946a693fae7f7ebb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:43 [async_llm.py:261] Added request cmpl-9fb0c887efc34e23946a693fae7f7ebb-0.
INFO 03-02 00:51:44 [logger.py:42] Received request cmpl-40c10e241d6b4aee95e6f0262e28b0ab-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:44 [async_llm.py:261] Added request cmpl-40c10e241d6b4aee95e6f0262e28b0ab-0.
INFO 03-02 00:51:45 [logger.py:42] Received request cmpl-3de5c5bdf1084aafae3b8a6f323cb774-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:45 [async_llm.py:261] Added request cmpl-3de5c5bdf1084aafae3b8a6f323cb774-0.
INFO 03-02 00:51:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:51:46 [logger.py:42] Received request cmpl-63d83ef004b84c5eab928e4388c1f154-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:46 [async_llm.py:261] Added request cmpl-63d83ef004b84c5eab928e4388c1f154-0.
INFO 03-02 00:51:47 [logger.py:42] Received request cmpl-30ef43962ff94c7aa9c2bcf1dffd684a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:47 [async_llm.py:261] Added request cmpl-30ef43962ff94c7aa9c2bcf1dffd684a-0.
INFO 03-02 00:51:48 [logger.py:42] Received request cmpl-3793c584bdde458db3cc7511abc438ab-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:48 [async_llm.py:261] Added request cmpl-3793c584bdde458db3cc7511abc438ab-0.
INFO 03-02 00:51:49 [logger.py:42] Received request cmpl-54018eaaa8da4300a897c4e9206b3d8e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:49 [async_llm.py:261] Added request cmpl-54018eaaa8da4300a897c4e9206b3d8e-0.
INFO 03-02 00:51:50 [logger.py:42] Received request cmpl-393961fbfa1b41ed899e864c3a9ee301-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:50 [async_llm.py:261] Added request cmpl-393961fbfa1b41ed899e864c3a9ee301-0.
INFO 03-02 00:51:51 [logger.py:42] Received request cmpl-43e60cdd536342a5997bb00d25c97be4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:51 [async_llm.py:261] Added request cmpl-43e60cdd536342a5997bb00d25c97be4-0.
INFO 03-02 00:51:53 [logger.py:42] Received request cmpl-6f43612bcd3040a0b3147cc26f8290bb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:53 [async_llm.py:261] Added request cmpl-6f43612bcd3040a0b3147cc26f8290bb-0.
INFO 03-02 00:51:54 [logger.py:42] Received request cmpl-d72838a0d9b346edaf2422fecb047976-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:54 [async_llm.py:261] Added request cmpl-d72838a0d9b346edaf2422fecb047976-0.
INFO 03-02 00:51:55 [logger.py:42] Received request cmpl-7ae0bf07baf848c9ae1cf7521617d2e6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:55 [async_llm.py:261] Added request cmpl-7ae0bf07baf848c9ae1cf7521617d2e6-0.
INFO 03-02 00:51:56 [logger.py:42] Received request cmpl-1a4d69d2495840d6a3f46f848c25ac3b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:56 [async_llm.py:261] Added request cmpl-1a4d69d2495840d6a3f46f848c25ac3b-0.
INFO 03-02 00:51:56 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.6 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 1.3%, Prefix cache hit rate: 0.0%
INFO 03-02 00:51:57 [logger.py:42] Received request cmpl-f9758d8f28cb4bad9ed8857c51ec4670-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:57 [async_llm.py:261] Added request cmpl-f9758d8f28cb4bad9ed8857c51ec4670-0.
INFO 03-02 00:51:58 [logger.py:42] Received request cmpl-337a361cd1ea4ef3bfba2544d1c86998-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:58 [async_llm.py:261] Added request cmpl-337a361cd1ea4ef3bfba2544d1c86998-0.
INFO 03-02 00:51:59 [logger.py:42] Received request cmpl-9e5d8fdbe67c4dd19c99b7b1442344e3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:59 [async_llm.py:261] Added request cmpl-9e5d8fdbe67c4dd19c99b7b1442344e3-0.
INFO 03-02 00:52:00 [logger.py:42] Received request cmpl-da2d67ca27c3414ebb3fbdc4cbbebfd3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:52:00 [async_llm.py:261] Added request cmpl-da2d67ca27c3414ebb3fbdc4cbbebfd3-0.
INFO 03-02 00:52:01 [logger.py:42] Received request cmpl-78aaf88a617140698489da3512ff9473-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:52:01 [async_llm.py:261] Added request cmpl-78aaf88a617140698489da3512ff9473-0.
INFO 03-02 00:52:02 [logger.py:42] Received request cmpl-c53a5e6faba84f35922439cfa5b2a88a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:52:02 [async_llm.py:261] Added request cmpl-c53a5e6faba84f35922439cfa5b2a88a-0.
INFO 03-02 00:52:03 [logger.py:42] Received request cmpl-07e19a336cf647afb466c35f4b0e90ed-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:52:03 [async_llm.py:261] Added request cmpl-07e19a336cf647afb466c35f4b0e90ed-0.
INFO 03-02 00:52:04 [logger.py:42] Received request cmpl-f90e0e5f49b9462eb7d14941ea035992-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:52:04 [async_llm.py:261] Added request cmpl-f90e0e5f49b9462eb7d14941ea035992-0.
INFO 03-02 00:52:06 [logger.py:42] Received request cmpl-a6467452e4a04b64a58fcff2e19a45ff-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:52:06 [async_llm.py:261] Added request cmpl-a6467452e4a04b64a58fcff2e19a45ff-0.
INFO 03-02 00:52:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.9 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:52:07 [logger.py:42] Received request cmpl-1cb3a241cad84f04933cf69e3953e7e6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:52:07 [async_llm.py:261] Added request cmpl-1cb3a241cad84f04933cf69e3953e7e6-0.
INFO 03-02 00:52:08 [logger.py:42] Received request cmpl-cb59af70d1b5474b909971af05bcab6d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:52:08 [async_llm.py:261] Added request cmpl-cb59af70d1b5474b909971af05bcab6d-0.
INFO 03-02 00:52:09 [logger.py:42] Received request cmpl-51aff9bcd78d430d998119aa2cea1ae7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:52:09 [async_llm.py:261] Added request cmpl-51aff9bcd78d430d998119aa2cea1ae7-0.
INFO 03-02 00:52:10 [logger.py:42] Received request cmpl-31a1ea7f61294803ab4787876018ab3e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:52:10 [async_llm.py:261] Added request cmpl-31a1ea7f61294803ab4787876018ab3e-0.
INFO 03-02 00:52:11 [logger.py:42] Received request cmpl-186c2d2e50c643098c1461d2849a47a6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:52:11 [async_llm.py:261] Added request cmpl-186c2d2e50c643098c1461d2849a47a6-0.
INFO 03-02 00:52:12 [logger.py:42] Received request cmpl-145d0c6dffb74347b7f8e4697105f984-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:52:12 [async_llm.py:261] Added request cmpl-145d0c6dffb74347b7f8e4697105f984-0.
INFO 03-02 00:52:13 [logger.py:42] Received request cmpl-b20afe414d8247aca90c9f4edb6d803a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:52:13 [async_llm.py:261] Added request cmpl-b20afe414d8247aca90c9f4edb6d803a-0.
INFO 03-02 00:52:14 [logger.py:42] Received request cmpl-c0ef385a14454d7d80dd5805597c2a7f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:52:14 [async_llm.py:261] Added request cmpl-c0ef385a14454d7d80dd5805597c2a7f-0.
INFO 03-02 00:52:15 [logger.py:42] Received request cmpl-98773dcaf75f4a59a95dffaf872d4ca3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:52:15 [async_llm.py:261] Added request cmpl-98773dcaf75f4a59a95dffaf872d4ca3-0.
INFO 03-02 00:52:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:52:16 [logger.py:42] Received request cmpl-2dad079e8aa54167a1e76120dac7aa33-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:52:16 [async_llm.py:261] Added request cmpl-2dad079e8aa54167a1e76120dac7aa33-0.
INFO 03-02 00:52:17 [logger.py:42] Received request cmpl-8ec20ef5f32f4d7788259b8e44192fca-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:52:17 [async_llm.py:261] Added request cmpl-8ec20ef5f32f4d7788259b8e44192fca-0.
INFO 03-02 00:52:19 [logger.py:42] Received request cmpl-83a44921ca2246f08124ac1ab4605bd3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:52:19 [async_llm.py:261] Added request cmpl-83a44921ca2246f08124ac1ab4605bd3-0.
INFO 03-02 00:52:20 [logger.py:42] Received request cmpl-3a4c5ea58551452ebc24aa56ade3fae3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:52:20 [async_llm.py:261] Added request cmpl-3a4c5ea58551452ebc24aa56ade3fae3-0.
INFO 03-02 00:52:21 [logger.py:42] Received request cmpl-35b7bf8878b140d19dbae3b9ec72b8ff-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:52:21 [async_llm.py:261] Added request cmpl-35b7bf8878b140d19dbae3b9ec72b8ff-0.
INFO 03-02 00:52:22 [logger.py:42] Received request cmpl-9d3d37c34fb8445a8d6b6d401a2551dd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:52:22 [async_llm.py:261] Added request cmpl-9d3d37c34fb8445a8d6b6d401a2551dd-0.
INFO 03-02 00:52:23 [logger.py:42] Received request cmpl-6a0d5ae96b574fdbba984947d1471f43-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:52:23 [async_llm.py:261] Added request cmpl-6a0d5ae96b574fdbba984947d1471f43-0.
INFO 03-02 00:52:24 [logger.py:42] Received request cmpl-797011ab358948bfa86eaf14595e2487-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:52:24 [async_llm.py:261] Added request cmpl-797011ab358948bfa86eaf14595e2487-0.
INFO 03-02 00:52:25 [logger.py:42] Received request cmpl-b1ba8a61dadf4b5294ca7c54d14f8ce4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:52:25 [async_llm.py:261] Added request cmpl-b1ba8a61dadf4b5294ca7c54d14f8ce4-0.
INFO 03-02 00:52:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:52:26 [logger.py:42] Received request cmpl-9e0a458bdcac483b8c74a2fd06d6739e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:52:26 [async_llm.py:261] Added request cmpl-9e0a458bdcac483b8c74a2fd06d6739e-0.
INFO 03-02 00:52:27 [logger.py:42] Received request cmpl-a38817151088480b9f30e296a06c60c7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:52:27 [async_llm.py:261] Added request cmpl-a38817151088480b9f30e296a06c60c7-0.
INFO 03-02 00:52:28 [logger.py:42] Received request cmpl-1fa10c775c1a44109ad31c6baf3ae66a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:52:28 [async_llm.py:261] Added request cmpl-1fa10c775c1a44109ad31c6baf3ae66a-0.
INFO 03-02 00:52:29 [logger.py:42] Received request cmpl-93abaa3eaee248a5bd7253a02dc8a1c7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:52:29 [async_llm.py:261] Added request cmpl-93abaa3eaee248a5bd7253a02dc8a1c7-0.
INFO 03-02 00:52:30 [logger.py:42] Received request cmpl-1e3cce958a4b4d3b815b4edfa9eb5fd4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:52:30 [async_llm.py:261] Added request cmpl-1e3cce958a4b4d3b815b4edfa9eb5fd4-0.
INFO 03-02 00:52:32 [logger.py:42] Received request cmpl-b6a4db1bf0454aa79f36c36be8397a7c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:52:32 [async_llm.py:261] Added request cmpl-b6a4db1bf0454aa79f36c36be8397a7c-0.
INFO 03-02 00:52:33 [logger.py:42] Received request cmpl-32c0ed83d3d847c0a8f4a4965c96ce1e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:52:33 [async_llm.py:261] Added request cmpl-32c0ed83d3d847c0a8f4a4965c96ce1e-0.
INFO 03-02 00:52:34 [logger.py:42] Received request cmpl-3fb66f5d0e0c467d947e6c4635c82b7d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:52:34 [async_llm.py:261] Added request cmpl-3fb66f5d0e0c467d947e6c4635c82b7d-0.
INFO 03-02 00:52:35 [logger.py:42] Received request cmpl-75ca78f814cc49379464a9e2e04418a9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:52:35 [async_llm.py:261] Added request cmpl-75ca78f814cc49379464a9e2e04418a9-0.
INFO 03-02 00:52:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:52:36 [logger.py:42] Received request cmpl-52e9fbac96bd4e3a83d10428ce164c06-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:52:36 [async_llm.py:261] Added request cmpl-52e9fbac96bd4e3a83d10428ce164c06-0.
INFO 03-02 00:52:37 [logger.py:42] Received request cmpl-70b045d813074db8a74e320763d91754-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:52:37 [async_llm.py:261] Added request cmpl-70b045d813074db8a74e320763d91754-0.
INFO 03-02 00:52:38 [logger.py:42] Received request cmpl-a27d40e19023402f98c75bc1c9da731a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:52:38 [async_llm.py:261] Added request cmpl-a27d40e19023402f98c75bc1c9da731a-0.
INFO 03-02 00:52:39 [logger.py:42] Received request cmpl-d67b82f8b3f34bf99a6bedc03a9663b0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:52:39 [async_llm.py:261] Added request cmpl-d67b82f8b3f34bf99a6bedc03a9663b0-0.
INFO 03-02 00:52:40 [logger.py:42] Received request cmpl-d80d2422f00f4cbcb7cb34b24f25f5d3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:52:40 [async_llm.py:261] Added request cmpl-d80d2422f00f4cbcb7cb34b24f25f5d3-0.
[... the same three-line Received request / 200 OK / Added request pattern repeats roughly once per second from 00:52:41 through 00:53:25 (about 40 further requests); only the timestamp and request ID change ...]
INFO 03-02 00:52:46 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[... the Engine 000 stats line repeats every 10 s (00:52:56, 00:53:06, 00:53:16) with Avg prompt throughput 6.3 tokens/s and Avg generation throughput 4.5 tokens/s; Running and Waiting stay at 0 reqs, GPU KV cache usage at 0.7%, Prefix cache hit rate at 0.0% ...]
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:53:25 [async_llm.py:261] Added request cmpl-52609e208e50456caaa61341e021ee1c-0.
INFO 03-02 00:53:26 [logger.py:42] Received request cmpl-54670afd318b40478bdc9e9fc3c75b65-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:53:26 [async_llm.py:261] Added request cmpl-54670afd318b40478bdc9e9fc3c75b65-0.
INFO 03-02 00:53:26 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.7 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 1.3%, Prefix cache hit rate: 0.0%
INFO 03-02 00:53:27 [logger.py:42] Received request cmpl-e6ba100765da4b518c266126208eae8d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:53:27 [async_llm.py:261] Added request cmpl-e6ba100765da4b518c266126208eae8d-0.
INFO 03-02 00:53:28 [logger.py:42] Received request cmpl-d6fae3900af34523a7dcd82af2901849-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:53:28 [async_llm.py:261] Added request cmpl-d6fae3900af34523a7dcd82af2901849-0.
INFO 03-02 00:53:29 [logger.py:42] Received request cmpl-12926b5231c14cb4b6f7858d5e6a030c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:53:29 [async_llm.py:261] Added request cmpl-12926b5231c14cb4b6f7858d5e6a030c-0.
INFO 03-02 00:53:30 [logger.py:42] Received request cmpl-5393de464d1b4f45acaf0e409922a6d8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:53:30 [async_llm.py:261] Added request cmpl-5393de464d1b4f45acaf0e409922a6d8-0.
INFO 03-02 00:53:31 [logger.py:42] Received request cmpl-7e1a2bf14f0a4600b6b916648bfef08e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:53:31 [async_llm.py:261] Added request cmpl-7e1a2bf14f0a4600b6b916648bfef08e-0.
INFO 03-02 00:53:32 [logger.py:42] Received request cmpl-5d4b8c7479534d7e99615802745cfa61-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:53:32 [async_llm.py:261] Added request cmpl-5d4b8c7479534d7e99615802745cfa61-0.
INFO 03-02 00:53:33 [logger.py:42] Received request cmpl-52e6c771a4374b38b0a36a652455ea86-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:53:33 [async_llm.py:261] Added request cmpl-52e6c771a4374b38b0a36a652455ea86-0.
INFO 03-02 00:53:34 [logger.py:42] Received request cmpl-2c2a9a16c77343aba8d62127ffe5f4df-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:53:34 [async_llm.py:261] Added request cmpl-2c2a9a16c77343aba8d62127ffe5f4df-0.
INFO 03-02 00:53:36 [logger.py:42] Received request cmpl-8c2fb892ddd544628b2476eb2c58192d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:53:36 [async_llm.py:261] Added request cmpl-8c2fb892ddd544628b2476eb2c58192d-0.
INFO 03-02 00:53:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.8 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:53:37 [logger.py:42] Received request cmpl-4bed09f7c1c2456da66a3474c6db717c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:53:37 [async_llm.py:261] Added request cmpl-4bed09f7c1c2456da66a3474c6db717c-0.
INFO 03-02 00:53:38 [logger.py:42] Received request cmpl-c387c9a246d94a2d996cadd744d26668-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:53:38 [async_llm.py:261] Added request cmpl-c387c9a246d94a2d996cadd744d26668-0.
INFO 03-02 00:53:39 [logger.py:42] Received request cmpl-f3ae663a40bf4688bc25077a5dd4b17d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:53:39 [async_llm.py:261] Added request cmpl-f3ae663a40bf4688bc25077a5dd4b17d-0.
INFO 03-02 00:53:40 [logger.py:42] Received request cmpl-027b6984e7fa45668accf18ffbf9f808-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:53:40 [async_llm.py:261] Added request cmpl-027b6984e7fa45668accf18ffbf9f808-0.
INFO 03-02 00:53:41 [logger.py:42] Received request cmpl-724a039168b64543afebe7d344ff2800-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:53:41 [async_llm.py:261] Added request cmpl-724a039168b64543afebe7d344ff2800-0.
INFO 03-02 00:53:42 [logger.py:42] Received request cmpl-bd00870b0be04be28267410ce008e021-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:53:42 [async_llm.py:261] Added request cmpl-bd00870b0be04be28267410ce008e021-0.
INFO 03-02 00:53:43 [logger.py:42] Received request cmpl-f4a479d569064feaa91a02dd73b2a423-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:53:43 [async_llm.py:261] Added request cmpl-f4a479d569064feaa91a02dd73b2a423-0.
INFO 03-02 00:53:44 [logger.py:42] Received request cmpl-1d92545e1f714bf1a5a4ded06efb4a01-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:53:44 [async_llm.py:261] Added request cmpl-1d92545e1f714bf1a5a4ded06efb4a01-0.
INFO 03-02 00:53:45 [logger.py:42] Received request cmpl-8eb7d72a274845d08481bd580bc2b4f7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:53:45 [async_llm.py:261] Added request cmpl-8eb7d72a274845d08481bd580bc2b4f7-0.
INFO 03-02 00:53:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:53:46 [logger.py:42] Received request cmpl-31ae5ba44e904aa3b00ad213605b7232-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:53:46 [async_llm.py:261] Added request cmpl-31ae5ba44e904aa3b00ad213605b7232-0.
INFO 03-02 00:53:47 [logger.py:42] Received request cmpl-803869a96046474c8479fa2e62407b2d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:53:47 [async_llm.py:261] Added request cmpl-803869a96046474c8479fa2e62407b2d-0.
INFO 03-02 00:53:49 [logger.py:42] Received request cmpl-cb37d04e7e0f435b892b9ba60cc56273-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:53:49 [async_llm.py:261] Added request cmpl-cb37d04e7e0f435b892b9ba60cc56273-0.
INFO 03-02 00:53:50 [logger.py:42] Received request cmpl-46b08675d828465893870990fd3fea34-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:53:50 [async_llm.py:261] Added request cmpl-46b08675d828465893870990fd3fea34-0.
INFO 03-02 00:53:51 [logger.py:42] Received request cmpl-387eff33c2184d01a2ccbd07088d63b6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:53:51 [async_llm.py:261] Added request cmpl-387eff33c2184d01a2ccbd07088d63b6-0.
INFO 03-02 00:53:52 [logger.py:42] Received request cmpl-81321d9c879f41cdb9f0e221f0857cb6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:53:52 [async_llm.py:261] Added request cmpl-81321d9c879f41cdb9f0e221f0857cb6-0.
INFO 03-02 00:53:53 [logger.py:42] Received request cmpl-6990b010321247c0bd55457b419ef74c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:53:53 [async_llm.py:261] Added request cmpl-6990b010321247c0bd55457b419ef74c-0.
INFO 03-02 00:53:54 [logger.py:42] Received request cmpl-15e752bda0cd4e8eaabecd1ffac1a68d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:53:54 [async_llm.py:261] Added request cmpl-15e752bda0cd4e8eaabecd1ffac1a68d-0.
INFO 03-02 00:53:55 [logger.py:42] Received request cmpl-d8b2b64998f8402ab0c723bc59f584a2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:53:55 [async_llm.py:261] Added request cmpl-d8b2b64998f8402ab0c723bc59f584a2-0.
INFO 03-02 00:53:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
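The `Engine 000` lines above are vLLM's periodic stats reports: windowed average prompt/generation throughput, current running/waiting request counts, and KV-cache utilization. A minimal sketch for pulling the two throughput numbers out of such a line when post-processing this log (the regex is an assumption derived from the exact line format shown here, not an official vLLM parsing API):

```python
import re

# Matches the throughput fields of a vLLM "loggers.py:116" stats line,
# e.g. "Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, ..."
STATS = re.compile(
    r"Avg prompt throughput: (?P<prompt>[\d.]+) tokens/s, "
    r"Avg generation throughput: (?P<gen>[\d.]+) tokens/s"
)

# Sample line copied from the log above.
line = (
    "INFO 03-02 00:53:56 [loggers.py:116] Engine 000: "
    "Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, "
    "Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, "
    "Prefix cache hit rate: 0.0%"
)

m = STATS.search(line)
prompt_tps = float(m["prompt"])  # tokens/s spent on prefill
gen_tps = float(m["gen"])        # tokens/s spent on decode
```

Note that `Prefix cache hit rate: 0.0%` is expected here: with `max_tokens=5` and a 7-token prompt, there is little cached prefix to reuse between requests.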
INFO 03-02 00:53:56 [logger.py:42] Received request cmpl-bff41be6cdde4ee9b13e860c3b0327a2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:53:56 [async_llm.py:261] Added request cmpl-bff41be6cdde4ee9b13e860c3b0327a2-0.
INFO 03-02 00:53:57 [logger.py:42] Received request cmpl-1fec649d44804476987e5f9959b307d5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:53:57 [async_llm.py:261] Added request cmpl-1fec649d44804476987e5f9959b307d5-0.
INFO 03-02 00:53:58 [logger.py:42] Received request cmpl-7227a56a289a432b8a767492c6c65f1b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:53:58 [async_llm.py:261] Added request cmpl-7227a56a289a432b8a767492c6c65f1b-0.
INFO 03-02 00:54:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:54:16 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:54:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:54:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:54:36 [logger.py:42] Received request cmpl-235d49cdb7494738950f6163eb1e836a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:54:36 [async_llm.py:261] Added request cmpl-235d49cdb7494738950f6163eb1e836a-0.
INFO 03-02 00:54:37 [logger.py:42] Received request cmpl-356a2eec3351430e81399bddbca33b56-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:54:37 [async_llm.py:261] Added request cmpl-356a2eec3351430e81399bddbca33b56-0.
INFO 03-02 00:54:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
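The periodic `Engine 000` stat lines are the only entries here carrying distinct information: rolling prompt/generation throughput, queue depth, and KV-cache pressure. A small sketch for pulling those figures out of such a line (the regex is an assumption, written against the exact format shown above, not a vLLM-provided parser):

```python
import re

# One Engine stats line, copied verbatim from the log above.
line = ("INFO 03-02 00:54:46 [loggers.py:116] Engine 000: "
        "Avg prompt throughput: 6.3 tokens/s, "
        "Avg generation throughput: 4.5 tokens/s, "
        "Running: 0 reqs, Waiting: 0 reqs, "
        "GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%")

# Pattern keyed to the stat-line layout emitted by loggers.py in this log.
pattern = re.compile(
    r"Avg prompt throughput: (?P<prompt_tps>[\d.]+) tokens/s, "
    r"Avg generation throughput: (?P<gen_tps>[\d.]+) tokens/s, "
    r"Running: (?P<running>\d+) reqs, Waiting: (?P<waiting>\d+) reqs, "
    r"GPU KV cache usage: (?P<kv_pct>[\d.]+)%"
)

m = pattern.search(line)
stats = {k: float(v) for k, v in m.groupdict().items()}
print(stats)
```

The low KV-cache usage (under 2%) and empty queues are consistent with the workload in this log: short 7-token prompts capped at 5 generated tokens, arriving roughly once per second.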
INFO 03-02 00:54:56 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.6 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 1.3%, Prefix cache hit rate: 0.0%
INFO 03-02 00:55:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.9 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:55:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:22 [async_llm.py:261] Added request cmpl-96b4b14f3b074924a2b9bbaad3f630b9-0.
INFO 03-02 00:55:23 [logger.py:42] Received request cmpl-933867e315434f27b6621853b36318f0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:23 [async_llm.py:261] Added request cmpl-933867e315434f27b6621853b36318f0-0.
INFO 03-02 00:55:24 [logger.py:42] Received request cmpl-dd3df5e9e1bb4b93a33b3b998cfbe632-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:24 [async_llm.py:261] Added request cmpl-dd3df5e9e1bb4b93a33b3b998cfbe632-0.
INFO 03-02 00:55:25 [logger.py:42] Received request cmpl-d50745299aa54f27b674797dfc743a76-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:25 [async_llm.py:261] Added request cmpl-d50745299aa54f27b674797dfc743a76-0.
INFO 03-02 00:55:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:55:26 [logger.py:42] Received request cmpl-5e37131e9bc641c2ae9ac8e350d0290b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:26 [async_llm.py:261] Added request cmpl-5e37131e9bc641c2ae9ac8e350d0290b-0.
INFO 03-02 00:55:27 [logger.py:42] Received request cmpl-53dd58e401d641938a0264060208dd4f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:27 [async_llm.py:261] Added request cmpl-53dd58e401d641938a0264060208dd4f-0.
INFO 03-02 00:55:28 [logger.py:42] Received request cmpl-7b64a277b9e341579343e9e3c5f7a13a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:28 [async_llm.py:261] Added request cmpl-7b64a277b9e341579343e9e3c5f7a13a-0.
INFO 03-02 00:55:29 [logger.py:42] Received request cmpl-b995bfae671f44f597e39d1f313261ac-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:29 [async_llm.py:261] Added request cmpl-b995bfae671f44f597e39d1f313261ac-0.
INFO 03-02 00:55:30 [logger.py:42] Received request cmpl-727303e21e1d4f5f8accf759223cf34c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:30 [async_llm.py:261] Added request cmpl-727303e21e1d4f5f8accf759223cf34c-0.
INFO 03-02 00:55:32 [logger.py:42] Received request cmpl-c90edeeaf54f4ba5843fc284e6245247-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:32 [async_llm.py:261] Added request cmpl-c90edeeaf54f4ba5843fc284e6245247-0.
INFO 03-02 00:55:33 [logger.py:42] Received request cmpl-857d5a74978449d4aa5ee55e1138b6e1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:33 [async_llm.py:261] Added request cmpl-857d5a74978449d4aa5ee55e1138b6e1-0.
INFO 03-02 00:55:34 [logger.py:42] Received request cmpl-658f4ef5a567406d91ccbf95b3161a50-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:34 [async_llm.py:261] Added request cmpl-658f4ef5a567406d91ccbf95b3161a50-0.
INFO 03-02 00:55:35 [logger.py:42] Received request cmpl-c6d9a3b287be48f0a6b1e58c9cd2ee95-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:35 [async_llm.py:261] Added request cmpl-c6d9a3b287be48f0a6b1e58c9cd2ee95-0.
INFO 03-02 00:55:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:55:36 [logger.py:42] Received request cmpl-241d08895ee0486d8618d7af54a50de1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:36 [async_llm.py:261] Added request cmpl-241d08895ee0486d8618d7af54a50de1-0.
INFO 03-02 00:55:37 [logger.py:42] Received request cmpl-0251cd1d818445388c6f954eefb21579-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:37 [async_llm.py:261] Added request cmpl-0251cd1d818445388c6f954eefb21579-0.
INFO 03-02 00:55:38 [logger.py:42] Received request cmpl-bf7aa0f2585c4c21912ca73e3c1cce63-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:38 [async_llm.py:261] Added request cmpl-bf7aa0f2585c4c21912ca73e3c1cce63-0.
INFO 03-02 00:55:39 [logger.py:42] Received request cmpl-7e5400d5a9434bce953473f991e462f9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:39 [async_llm.py:261] Added request cmpl-7e5400d5a9434bce953473f991e462f9-0.
INFO 03-02 00:55:40 [logger.py:42] Received request cmpl-1f9374e075af4b31a5b1843defceca62-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:40 [async_llm.py:261] Added request cmpl-1f9374e075af4b31a5b1843defceca62-0.
INFO 03-02 00:55:41 [logger.py:42] Received request cmpl-b2b744b7521e449f824bcb5bd25087b6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:41 [async_llm.py:261] Added request cmpl-b2b744b7521e449f824bcb5bd25087b6-0.
INFO 03-02 00:55:42 [logger.py:42] Received request cmpl-3891c7e6ab724745bdb4d6cf687e8ac6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:42 [async_llm.py:261] Added request cmpl-3891c7e6ab724745bdb4d6cf687e8ac6-0.
INFO 03-02 00:55:43 [logger.py:42] Received request cmpl-71def052672e4cbaaf9522562f2f775f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:43 [async_llm.py:261] Added request cmpl-71def052672e4cbaaf9522562f2f775f-0.
INFO 03-02 00:55:45 [logger.py:42] Received request cmpl-3b8805413e1d413fba30a8efb5d0df0a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:45 [async_llm.py:261] Added request cmpl-3b8805413e1d413fba30a8efb5d0df0a-0.
INFO 03-02 00:55:46 [logger.py:42] Received request cmpl-8a5d7b698cbd4e6eb747c3f31b6829d7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:46 [async_llm.py:261] Added request cmpl-8a5d7b698cbd4e6eb747c3f31b6829d7-0.
INFO 03-02 00:55:46 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:55:47 [logger.py:42] Received request cmpl-7e6c26ef99f345d7a92910fa9fbcfa3d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:47 [async_llm.py:261] Added request cmpl-7e6c26ef99f345d7a92910fa9fbcfa3d-0.
INFO 03-02 00:55:48 [logger.py:42] Received request cmpl-f788d7ab7cb34f349025c55af3bcf609-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:48 [async_llm.py:261] Added request cmpl-f788d7ab7cb34f349025c55af3bcf609-0.
INFO 03-02 00:55:49 [logger.py:42] Received request cmpl-331fc0d6ef794881aedfa3e92bc9debf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:49 [async_llm.py:261] Added request cmpl-331fc0d6ef794881aedfa3e92bc9debf-0.
INFO 03-02 00:55:50 [logger.py:42] Received request cmpl-51d6193846154f8a9641b7a354b3e312-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:50 [async_llm.py:261] Added request cmpl-51d6193846154f8a9641b7a354b3e312-0.
INFO 03-02 00:55:51 [logger.py:42] Received request cmpl-31d4c67a1e7c425aad494d1d288b3f34-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:51 [async_llm.py:261] Added request cmpl-31d4c67a1e7c425aad494d1d288b3f34-0.
INFO 03-02 00:55:52 [logger.py:42] Received request cmpl-37794e93368d445c8ec0e010644f9e5b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:52 [async_llm.py:261] Added request cmpl-37794e93368d445c8ec0e010644f9e5b-0.
INFO 03-02 00:55:53 [logger.py:42] Received request cmpl-0e43617180f147a5ba3af12cad770d1b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:53 [async_llm.py:261] Added request cmpl-0e43617180f147a5ba3af12cad770d1b-0.
INFO 03-02 00:55:54 [logger.py:42] Received request cmpl-350cdf3c335b44eb8529e56ce8496a6f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:54 [async_llm.py:261] Added request cmpl-350cdf3c335b44eb8529e56ce8496a6f-0.
INFO 03-02 00:55:55 [logger.py:42] Received request cmpl-5fa12cf0ee7c433aa3257afa2d8a3c6d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:55 [async_llm.py:261] Added request cmpl-5fa12cf0ee7c433aa3257afa2d8a3c6d-0.
INFO 03-02 00:55:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[... 9 further request/response cycles elided, 00:55:57–00:56:05: each repeats the same 'write a quick sort algorithm.' prompt with identical SamplingParams (max_tokens=5), a "POST /v1/completions HTTP/1.1" 200 OK, and an "Added request cmpl-…" line ...]
INFO 03-02 00:56:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[... 9 further identical request/response cycles elided, 00:56:06–00:56:15 ...]
INFO 03-02 00:56:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[... 10 further identical request/response cycles elided, 00:56:16–00:56:26 ...]
INFO 03-02 00:56:26 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.7 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 1.3%, Prefix cache hit rate: 0.0%
[... 9 further identical request/response cycles elided, 00:56:27–00:56:36 ...]
INFO 03-02 00:56:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.8 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[... 3 further identical request/response cycles elided, 00:56:37–00:56:39; the log truncates mid-entry at 00:56:40 ...]
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:56:40 [async_llm.py:261] Added request cmpl-a73ae2c17d254fd19eec8d9773fbe2ec-0.
INFO 03-02 00:56:41 [logger.py:42] Received request cmpl-50dd80e8623749d18bee6aabb9c62105-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:56:41 [async_llm.py:261] Added request cmpl-50dd80e8623749d18bee6aabb9c62105-0.
INFO 03-02 00:56:42 [logger.py:42] Received request cmpl-8acd83cd0d5e4450a2c482a5b354e9a8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:56:42 [async_llm.py:261] Added request cmpl-8acd83cd0d5e4450a2c482a5b354e9a8-0.
INFO 03-02 00:56:43 [logger.py:42] Received request cmpl-e8f704f0eea84adaab9b9b6ac084a057-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:56:43 [async_llm.py:261] Added request cmpl-e8f704f0eea84adaab9b9b6ac084a057-0.
INFO 03-02 00:56:44 [logger.py:42] Received request cmpl-4b190203149b4db0810c710d644bd541-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:56:44 [async_llm.py:261] Added request cmpl-4b190203149b4db0810c710d644bd541-0.
INFO 03-02 00:56:45 [logger.py:42] Received request cmpl-0b9730d1e66b46a693b8735cfd2ddf5f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:56:45 [async_llm.py:261] Added request cmpl-0b9730d1e66b46a693b8735cfd2ddf5f-0.
INFO 03-02 00:56:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:56:46 [logger.py:42] Received request cmpl-b1a303b5c8ad46d88f0538e23d58916d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:56:46 [async_llm.py:261] Added request cmpl-b1a303b5c8ad46d88f0538e23d58916d-0.
INFO 03-02 00:56:47 [logger.py:42] Received request cmpl-3c362fe2a1d44ddcbad0506deb87d4f1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:56:47 [async_llm.py:261] Added request cmpl-3c362fe2a1d44ddcbad0506deb87d4f1-0.
INFO 03-02 00:56:49 [logger.py:42] Received request cmpl-d5c8e9c0598b4e0bafc9228fdd4aca52-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:56:49 [async_llm.py:261] Added request cmpl-d5c8e9c0598b4e0bafc9228fdd4aca52-0.
INFO 03-02 00:56:50 [logger.py:42] Received request cmpl-0be88208fd16474088db3f7671185bca-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:56:50 [async_llm.py:261] Added request cmpl-0be88208fd16474088db3f7671185bca-0.
INFO 03-02 00:56:51 [logger.py:42] Received request cmpl-41452e2ff9914ccbbdf582a17d68f478-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:56:51 [async_llm.py:261] Added request cmpl-41452e2ff9914ccbbdf582a17d68f478-0.
INFO 03-02 00:56:52 [logger.py:42] Received request cmpl-89d03160d621429bbc996ce0d6ee9d14-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:56:52 [async_llm.py:261] Added request cmpl-89d03160d621429bbc996ce0d6ee9d14-0.
INFO 03-02 00:56:53 [logger.py:42] Received request cmpl-9a9cf80399ee40448c7ffeda024c17cb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:56:53 [async_llm.py:261] Added request cmpl-9a9cf80399ee40448c7ffeda024c17cb-0.
INFO 03-02 00:56:54 [logger.py:42] Received request cmpl-714cac9d4b2d43aab30f654ebe5dec42-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:56:54 [async_llm.py:261] Added request cmpl-714cac9d4b2d43aab30f654ebe5dec42-0.
INFO 03-02 00:56:55 [logger.py:42] Received request cmpl-c47b06eab0374dc2b0d5e9a31d9d0c21-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:56:55 [async_llm.py:261] Added request cmpl-c47b06eab0374dc2b0d5e9a31d9d0c21-0.
INFO 03-02 00:56:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
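The periodic `[loggers.py:116]` entries like the one above report engine-level metrics (prompt/generation throughput, queue depth, KV-cache usage). A minimal sketch of pulling those numbers out of such lines with a regular expression — the helper name and field names are illustrative, not part of vLLM:

```python
import re

# Matches the metrics portion of a vLLM periodic stats line, e.g.
# "Engine 000: Avg prompt throughput: 6.3 tokens/s, ... GPU KV cache usage: 0.7%"
STATS_RE = re.compile(
    r"Avg prompt throughput: (?P<prompt_tps>[\d.]+) tokens/s, "
    r"Avg generation throughput: (?P<gen_tps>[\d.]+) tokens/s, "
    r"Running: (?P<running>\d+) reqs, Waiting: (?P<waiting>\d+) reqs, "
    r"GPU KV cache usage: (?P<kv_pct>[\d.]+)%"
)

def parse_engine_stats(line):
    """Return the metrics as numbers, or None if this is not a stats line."""
    m = STATS_RE.search(line)
    if m is None:
        return None
    return {
        "prompt_tps": float(m.group("prompt_tps")),
        "gen_tps": float(m.group("gen_tps")),
        "running": int(m.group("running")),
        "waiting": int(m.group("waiting")),
        "kv_cache_pct": float(m.group("kv_pct")),
    }

line = ("INFO 03-02 00:56:56 [loggers.py:116] Engine 000: "
        "Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, "
        "Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, "
        "Prefix cache hit rate: 0.0%")
print(parse_engine_stats(line))
```

Feeding each log line through a parser like this is one way to chart throughput over the lifetime of the pod instead of eyeballing the raw log.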
INFO 03-02 00:56:56 [logger.py:42] Received request cmpl-108df0d35f654133a800930140b35821-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:56:56 [async_llm.py:261] Added request cmpl-108df0d35f654133a800930140b35821-0.
INFO 03-02 00:56:57 [logger.py:42] Received request cmpl-2bfb4fa3053746149c9c84a904f4ec4f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:56:57 [async_llm.py:261] Added request cmpl-2bfb4fa3053746149c9c84a904f4ec4f-0.
INFO 03-02 00:56:58 [logger.py:42] Received request cmpl-84006806c0b143c1b1eab5b1b3717d5e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:56:58 [async_llm.py:261] Added request cmpl-84006806c0b143c1b1eab5b1b3717d5e-0.
INFO 03-02 00:56:59 [logger.py:42] Received request cmpl-10673ac6b01444eba221ecfb1b9e454d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:56:59 [async_llm.py:261] Added request cmpl-10673ac6b01444eba221ecfb1b9e454d-0.
INFO 03-02 00:57:00 [logger.py:42] Received request cmpl-37334c0e7a6048fb94abb7ad46e375b9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:00 [async_llm.py:261] Added request cmpl-37334c0e7a6048fb94abb7ad46e375b9-0.
INFO 03-02 00:57:02 [logger.py:42] Received request cmpl-ef9e3f08626545218a5ab10f52ab373a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:02 [async_llm.py:261] Added request cmpl-ef9e3f08626545218a5ab10f52ab373a-0.
INFO 03-02 00:57:03 [logger.py:42] Received request cmpl-999489e51b5642d4837299fa73f1ba5e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:03 [async_llm.py:261] Added request cmpl-999489e51b5642d4837299fa73f1ba5e-0.
INFO 03-02 00:57:04 [logger.py:42] Received request cmpl-31f1409897ce4853a92f9042b3276152-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:04 [async_llm.py:261] Added request cmpl-31f1409897ce4853a92f9042b3276152-0.
INFO 03-02 00:57:05 [logger.py:42] Received request cmpl-f2866f1f13e347e2bf6079756d21c193-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:05 [async_llm.py:261] Added request cmpl-f2866f1f13e347e2bf6079756d21c193-0.
INFO 03-02 00:57:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:57:06 [logger.py:42] Received request cmpl-00662fd870954753ac3cfe9814cc2536-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:06 [async_llm.py:261] Added request cmpl-00662fd870954753ac3cfe9814cc2536-0.
INFO 03-02 00:57:07 [logger.py:42] Received request cmpl-c74ab624f0d94a82bfcf0d0ee19a8995-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:07 [async_llm.py:261] Added request cmpl-c74ab624f0d94a82bfcf0d0ee19a8995-0.
INFO 03-02 00:57:08 [logger.py:42] Received request cmpl-47a256e95f044ef7a478b4cfa727ebd1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:08 [async_llm.py:261] Added request cmpl-47a256e95f044ef7a478b4cfa727ebd1-0.
INFO 03-02 00:57:09 [logger.py:42] Received request cmpl-46d5dab29e3348389fd6493e9da53d22-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:09 [async_llm.py:261] Added request cmpl-46d5dab29e3348389fd6493e9da53d22-0.
INFO 03-02 00:57:10 [logger.py:42] Received request cmpl-1d3fad056fe4432e95b962398d496ab4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:10 [async_llm.py:261] Added request cmpl-1d3fad056fe4432e95b962398d496ab4-0.
INFO 03-02 00:57:11 [logger.py:42] Received request cmpl-97da60e9c2e94b7aab3cfcae3d45c6dd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:11 [async_llm.py:261] Added request cmpl-97da60e9c2e94b7aab3cfcae3d45c6dd-0.
INFO 03-02 00:57:12 [logger.py:42] Received request cmpl-335b292cd2da4f179620257fd9cc8be5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:12 [async_llm.py:261] Added request cmpl-335b292cd2da4f179620257fd9cc8be5-0.
INFO 03-02 00:57:13 [logger.py:42] Received request cmpl-e844118a1b704839b9429496dc2684ad-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:13 [async_llm.py:261] Added request cmpl-e844118a1b704839b9429496dc2684ad-0.
INFO 03-02 00:57:15 [logger.py:42] Received request cmpl-e167715c72e54c41b5ef754d5f8473d5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:15 [async_llm.py:261] Added request cmpl-e167715c72e54c41b5ef754d5f8473d5-0.
INFO 03-02 00:57:16 [logger.py:42] Received request cmpl-7e1c63557e3740ca98a4acf958a69cb5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:16 [async_llm.py:261] Added request cmpl-7e1c63557e3740ca98a4acf958a69cb5-0.
INFO 03-02 00:57:16 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
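The periodic `Engine 000` stats lines are consistent with the request pattern: each prompt tokenizes to 7 ids (`prompt_token_ids` has 7 entries), each completion is capped at `max_tokens=5`, and requests arrive roughly once per second. Over the ~10-second averaging window that yields 7.0 prompt and 5.0 generation tokens/s when 10 requests land in the window, and 6.3/4.5 when only 9 do. A quick check of that arithmetic (the window length is an assumption read off the stats-line timestamps):

```python
PROMPT_TOKENS = 7   # len(prompt_token_ids) in each log entry
GEN_TOKENS = 5      # max_tokens=5, fully generated each time
WINDOW_S = 10.0     # stats lines appear roughly every 10 seconds

def avg_throughput(requests_in_window: int) -> tuple:
    """Average prompt/generation tokens per second over the window."""
    prompt_tps = PROMPT_TOKENS * requests_in_window / WINDOW_S
    gen_tps = GEN_TOKENS * requests_in_window / WINDOW_S
    return (round(prompt_tps, 1), round(gen_tps, 1))

print(avg_throughput(10))  # (7.0, 5.0) -- matches the 7.0/5.0 stats lines
print(avg_throughput(9))   # (6.3, 4.5) -- matches the 6.3/4.5 stats lines
```

The `Running: 0 reqs` field in the same lines is expected: with only 5 generated tokens per request, each request finishes well before the next arrives, so the snapshot almost always catches the engine idle.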
INFO 03-02 00:57:17 [logger.py:42] Received request cmpl-e05608ec3d0042539d5b866a1b6ae79d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:17 [async_llm.py:261] Added request cmpl-e05608ec3d0042539d5b866a1b6ae79d-0.
INFO 03-02 00:57:18 [logger.py:42] Received request cmpl-cc767232f6724ffd886b97cb694c914f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:18 [async_llm.py:261] Added request cmpl-cc767232f6724ffd886b97cb694c914f-0.
INFO 03-02 00:57:19 [logger.py:42] Received request cmpl-6d6612f93aa3451ea025a57ad4804379-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:19 [async_llm.py:261] Added request cmpl-6d6612f93aa3451ea025a57ad4804379-0.
INFO 03-02 00:57:20 [logger.py:42] Received request cmpl-049086d1767c45718ae60265b2cc7e8f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:20 [async_llm.py:261] Added request cmpl-049086d1767c45718ae60265b2cc7e8f-0.
INFO 03-02 00:57:21 [logger.py:42] Received request cmpl-c1e344fe27474c3fbaa12d70fb3ff80d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:21 [async_llm.py:261] Added request cmpl-c1e344fe27474c3fbaa12d70fb3ff80d-0.
INFO 03-02 00:57:22 [logger.py:42] Received request cmpl-89dca44cac93417d84d7af3337dd344b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:22 [async_llm.py:261] Added request cmpl-89dca44cac93417d84d7af3337dd344b-0.
INFO 03-02 00:57:23 [logger.py:42] Received request cmpl-28fedb9f59b1474dad2a9de0a89660d7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:23 [async_llm.py:261] Added request cmpl-28fedb9f59b1474dad2a9de0a89660d7-0.
INFO 03-02 00:57:24 [logger.py:42] Received request cmpl-ed05244c7cc24b66ac4bd5f6c193138e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:24 [async_llm.py:261] Added request cmpl-ed05244c7cc24b66ac4bd5f6c193138e-0.
INFO 03-02 00:57:25 [logger.py:42] Received request cmpl-49d9be994a334c71bbfbba69bcaadc0a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:25 [async_llm.py:261] Added request cmpl-49d9be994a334c71bbfbba69bcaadc0a-0.
INFO 03-02 00:57:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:57:26 [logger.py:42] Received request cmpl-a4caf9590bc145d282a3523eb17e75af-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:26 [async_llm.py:261] Added request cmpl-a4caf9590bc145d282a3523eb17e75af-0.
INFO 03-02 00:57:28 [logger.py:42] Received request cmpl-c22133d1633e4340b5e367a20fe0dee0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:28 [async_llm.py:261] Added request cmpl-c22133d1633e4340b5e367a20fe0dee0-0.
INFO 03-02 00:57:29 [logger.py:42] Received request cmpl-1ca8f6812c014f8697f6028b296fc21d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:29 [async_llm.py:261] Added request cmpl-1ca8f6812c014f8697f6028b296fc21d-0.
INFO 03-02 00:57:30 [logger.py:42] Received request cmpl-0f4a6388cfae42adb307a7f2373c31ff-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:30 [async_llm.py:261] Added request cmpl-0f4a6388cfae42adb307a7f2373c31ff-0.
INFO 03-02 00:57:31 [logger.py:42] Received request cmpl-bd4ea3c1ae0c42958423ccaaf903b421-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:31 [async_llm.py:261] Added request cmpl-bd4ea3c1ae0c42958423ccaaf903b421-0.
INFO 03-02 00:57:32 [logger.py:42] Received request cmpl-b2df100ea7f9495aaf1a2aa6e40ba4ce-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:32 [async_llm.py:261] Added request cmpl-b2df100ea7f9495aaf1a2aa6e40ba4ce-0.
INFO 03-02 00:57:33 [logger.py:42] Received request cmpl-4f3536d9596e439bace2ac2aae8bd566-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:33 [async_llm.py:261] Added request cmpl-4f3536d9596e439bace2ac2aae8bd566-0.
INFO 03-02 00:57:34 [logger.py:42] Received request cmpl-1e0a116fc0404e52ae059793ca82da0c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:34 [async_llm.py:261] Added request cmpl-1e0a116fc0404e52ae059793ca82da0c-0.
INFO 03-02 00:57:35 [logger.py:42] Received request cmpl-b6e4ccffb8604eef8402e6bb0a212352-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:35 [async_llm.py:261] Added request cmpl-b6e4ccffb8604eef8402e6bb0a212352-0.
INFO 03-02 00:57:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:57:36 [logger.py:42] Received request cmpl-b6c9defedc1649f28fed49d1e9c4bbdd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:36 [async_llm.py:261] Added request cmpl-b6c9defedc1649f28fed49d1e9c4bbdd-0.
INFO 03-02 00:57:37 [logger.py:42] Received request cmpl-3e7aa8ebff2a496eb9c6cac224bac9ac-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:37 [async_llm.py:261] Added request cmpl-3e7aa8ebff2a496eb9c6cac224bac9ac-0.
INFO 03-02 00:57:38 [logger.py:42] Received request cmpl-89e2aab40ded4372ad0445379e325955-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:38 [async_llm.py:261] Added request cmpl-89e2aab40ded4372ad0445379e325955-0.
INFO 03-02 00:57:39 [logger.py:42] Received request cmpl-15571dcae86c40928d234db9e21f812e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:39 [async_llm.py:261] Added request cmpl-15571dcae86c40928d234db9e21f812e-0.
INFO 03-02 00:57:41 [logger.py:42] Received request cmpl-1a1ff0eae9874698bd26bcedac8188ac-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:41 [async_llm.py:261] Added request cmpl-1a1ff0eae9874698bd26bcedac8188ac-0.
INFO 03-02 00:57:42 [logger.py:42] Received request cmpl-e99a84d917f944f7973b1797707dc65a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:42 [async_llm.py:261] Added request cmpl-e99a84d917f944f7973b1797707dc65a-0.
INFO 03-02 00:57:43 [logger.py:42] Received request cmpl-1d2c21aa1c984076b5e1a05c05071be2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:43 [async_llm.py:261] Added request cmpl-1d2c21aa1c984076b5e1a05c05071be2-0.
INFO 03-02 00:57:44 [logger.py:42] Received request cmpl-224648cfcd314151a86f29246705c060-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:44 [async_llm.py:261] Added request cmpl-224648cfcd314151a86f29246705c060-0.
INFO 03-02 00:57:45 [logger.py:42] Received request cmpl-2688069939aa4e8fb896bc6fc1bcd3bf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:45 [async_llm.py:261] Added request cmpl-2688069939aa4e8fb896bc6fc1bcd3bf-0.
INFO 03-02 00:57:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:57:46 [logger.py:42] Received request cmpl-bd03aa0a7b76469e9ec2932345df6a17-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:46 [async_llm.py:261] Added request cmpl-bd03aa0a7b76469e9ec2932345df6a17-0.
INFO 03-02 00:57:47 [logger.py:42] Received request cmpl-885b97951b014eabb30c8ea207566820-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:47 [async_llm.py:261] Added request cmpl-885b97951b014eabb30c8ea207566820-0.
INFO 03-02 00:57:48 [logger.py:42] Received request cmpl-dd13be5ba93c40b1b88500a5c9d00626-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:48 [async_llm.py:261] Added request cmpl-dd13be5ba93c40b1b88500a5c9d00626-0.
INFO 03-02 00:57:49 [logger.py:42] Received request cmpl-94037fb5fb374e05a565e69e2d1c504b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:49 [async_llm.py:261] Added request cmpl-94037fb5fb374e05a565e69e2d1c504b-0.
INFO 03-02 00:57:50 [logger.py:42] Received request cmpl-e4e84156ca6c4800acec1476ffd80f01-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:50 [async_llm.py:261] Added request cmpl-e4e84156ca6c4800acec1476ffd80f01-0.
INFO 03-02 00:57:51 [logger.py:42] Received request cmpl-9fd79aa886d04805ba0e0fed8ff5ae62-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:51 [async_llm.py:261] Added request cmpl-9fd79aa886d04805ba0e0fed8ff5ae62-0.
INFO 03-02 00:57:52 [logger.py:42] Received request cmpl-455b5c3d22cd443ba071fc17808ec3eb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:52 [async_llm.py:261] Added request cmpl-455b5c3d22cd443ba071fc17808ec3eb-0.
[... the same three-line pattern — Received request, "POST /v1/completions HTTP/1.1" 200 OK, Added request — repeats roughly once per second for further cmpl-* request IDs (00:57:54 through 00:58:35); individual entries elided. All requests carry the identical prompt and SamplingParams shown above. Periodic engine summaries from this window: ...]
INFO 03-02 00:57:56 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:58:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:58:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:58:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:58:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:58:36 [logger.py:42] Received request cmpl-2543d6d31dd24bdd8f9a62f9f744995e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:58:36 [async_llm.py:261] Added request cmpl-2543d6d31dd24bdd8f9a62f9f744995e-0.
INFO 03-02 00:58:37 [logger.py:42] Received request cmpl-02fd7d3c1e8a4404b1850e7eb45bb76f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:58:37 [async_llm.py:261] Added request cmpl-02fd7d3c1e8a4404b1850e7eb45bb76f-0.
INFO 03-02 00:58:38 [logger.py:42] Received request cmpl-3a634316bd574e14b4c558ce5c8dea83-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:58:38 [async_llm.py:261] Added request cmpl-3a634316bd574e14b4c558ce5c8dea83-0.
INFO 03-02 00:58:39 [logger.py:42] Received request cmpl-920ab85381024f81bc9e4878cb23c7ef-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:58:39 [async_llm.py:261] Added request cmpl-920ab85381024f81bc9e4878cb23c7ef-0.
INFO 03-02 00:58:40 [logger.py:42] Received request cmpl-25505c62bc584b9588fc2d2240356982-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:58:40 [async_llm.py:261] Added request cmpl-25505c62bc584b9588fc2d2240356982-0.
INFO 03-02 00:58:41 [logger.py:42] Received request cmpl-ff635da6d31340f09b72d4985dbbe500-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:58:41 [async_llm.py:261] Added request cmpl-ff635da6d31340f09b72d4985dbbe500-0.
INFO 03-02 00:58:42 [logger.py:42] Received request cmpl-8231d78c147543e282303a799fff4518-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:58:42 [async_llm.py:261] Added request cmpl-8231d78c147543e282303a799fff4518-0.
INFO 03-02 00:58:43 [logger.py:42] Received request cmpl-196441e9672d4dcf8f22db88e4d24b76-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:58:43 [async_llm.py:261] Added request cmpl-196441e9672d4dcf8f22db88e4d24b76-0.
INFO 03-02 00:58:45 [logger.py:42] Received request cmpl-cc86afd633144625a0f9bcda538f083d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:58:45 [async_llm.py:261] Added request cmpl-cc86afd633144625a0f9bcda538f083d-0.
INFO 03-02 00:58:46 [logger.py:42] Received request cmpl-13207990fa8e434a8f2a427ecdf46e06-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:58:46 [async_llm.py:261] Added request cmpl-13207990fa8e434a8f2a427ecdf46e06-0.
INFO 03-02 00:58:46 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:58:47 [logger.py:42] Received request cmpl-6389fdab02314c7c8f1bd09c64f83110-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:58:47 [async_llm.py:261] Added request cmpl-6389fdab02314c7c8f1bd09c64f83110-0.
INFO 03-02 00:58:48 [logger.py:42] Received request cmpl-eb4608b9a871464d99e1b05b4ae879cb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:58:48 [async_llm.py:261] Added request cmpl-eb4608b9a871464d99e1b05b4ae879cb-0.
INFO 03-02 00:58:49 [logger.py:42] Received request cmpl-947094cade2447038449ce8e159feeff-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:58:49 [async_llm.py:261] Added request cmpl-947094cade2447038449ce8e159feeff-0.
INFO 03-02 00:58:50 [logger.py:42] Received request cmpl-1112cd4e3707465faf4c3531c39f2dcf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:58:50 [async_llm.py:261] Added request cmpl-1112cd4e3707465faf4c3531c39f2dcf-0.
INFO 03-02 00:58:51 [logger.py:42] Received request cmpl-067e7e68563c4174835e67b94f5a5ef1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:58:51 [async_llm.py:261] Added request cmpl-067e7e68563c4174835e67b94f5a5ef1-0.
INFO 03-02 00:58:52 [logger.py:42] Received request cmpl-56e457430e9449b58ff234f18a1e504f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:58:52 [async_llm.py:261] Added request cmpl-56e457430e9449b58ff234f18a1e504f-0.
INFO 03-02 00:58:53 [logger.py:42] Received request cmpl-fa992de3cfaf4a13b16331e06e65ecf9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:58:53 [async_llm.py:261] Added request cmpl-fa992de3cfaf4a13b16331e06e65ecf9-0.
INFO 03-02 00:58:54 [logger.py:42] Received request cmpl-188d4bdd30904acc9212bb1eaf5da197-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:58:54 [async_llm.py:261] Added request cmpl-188d4bdd30904acc9212bb1eaf5da197-0.
INFO 03-02 00:58:55 [logger.py:42] Received request cmpl-5a45821885514d8599bc1c7d3971c155-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:58:55 [async_llm.py:261] Added request cmpl-5a45821885514d8599bc1c7d3971c155-0.
INFO 03-02 00:58:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:58:56 [logger.py:42] Received request cmpl-fd7dd445373e4951805b767f6056fb59-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:58:56 [async_llm.py:261] Added request cmpl-fd7dd445373e4951805b767f6056fb59-0.
INFO 03-02 00:58:58 [logger.py:42] Received request cmpl-d7c44b09a53e43448fe7b09fe94c3a6d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:58:58 [async_llm.py:261] Added request cmpl-d7c44b09a53e43448fe7b09fe94c3a6d-0.
INFO 03-02 00:58:59 [logger.py:42] Received request cmpl-3188a2290f994464b329fa48006e343f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:58:59 [async_llm.py:261] Added request cmpl-3188a2290f994464b329fa48006e343f-0.
INFO 03-02 00:59:00 [logger.py:42] Received request cmpl-dc0ca7aef6214c109bd319db72d1a01b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:00 [async_llm.py:261] Added request cmpl-dc0ca7aef6214c109bd319db72d1a01b-0.
INFO 03-02 00:59:01 [logger.py:42] Received request cmpl-5dd620269ed34839a1f6434eac7015b8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:01 [async_llm.py:261] Added request cmpl-5dd620269ed34839a1f6434eac7015b8-0.
INFO 03-02 00:59:02 [logger.py:42] Received request cmpl-b4371da9bc8146549f619a1f0654a9d7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:02 [async_llm.py:261] Added request cmpl-b4371da9bc8146549f619a1f0654a9d7-0.
INFO 03-02 00:59:03 [logger.py:42] Received request cmpl-f0f016636153423caa8e8818b7dcaf73-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:03 [async_llm.py:261] Added request cmpl-f0f016636153423caa8e8818b7dcaf73-0.
INFO 03-02 00:59:04 [logger.py:42] Received request cmpl-c19f71e4453b4a78842c997b12dacd9a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:04 [async_llm.py:261] Added request cmpl-c19f71e4453b4a78842c997b12dacd9a-0.
INFO 03-02 00:59:05 [logger.py:42] Received request cmpl-63eed9c8c512421da9c2692f58d071a9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:05 [async_llm.py:261] Added request cmpl-63eed9c8c512421da9c2692f58d071a9-0.
INFO 03-02 00:59:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:59:06 [logger.py:42] Received request cmpl-2f58a15f3f1d4f66ab046539b31cfdf4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:06 [async_llm.py:261] Added request cmpl-2f58a15f3f1d4f66ab046539b31cfdf4-0.
INFO 03-02 00:59:07 [logger.py:42] Received request cmpl-bc54b6bac16f4ad8bf17f8859f51dfd6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:07 [async_llm.py:261] Added request cmpl-bc54b6bac16f4ad8bf17f8859f51dfd6-0.
INFO 03-02 00:59:08 [logger.py:42] Received request cmpl-26d854082b0e4655be3254eeb4d4287f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:08 [async_llm.py:261] Added request cmpl-26d854082b0e4655be3254eeb4d4287f-0.
INFO 03-02 00:59:09 [logger.py:42] Received request cmpl-e8243669f4dc47dc88dd744d1832d015-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:09 [async_llm.py:261] Added request cmpl-e8243669f4dc47dc88dd744d1832d015-0.
INFO 03-02 00:59:11 [logger.py:42] Received request cmpl-a492e9c1b4c84edb9ed7bec9b6dc2f47-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:11 [async_llm.py:261] Added request cmpl-a492e9c1b4c84edb9ed7bec9b6dc2f47-0.
INFO 03-02 00:59:12 [logger.py:42] Received request cmpl-332a50cad5574911ba6863ed9fa61ee7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:12 [async_llm.py:261] Added request cmpl-332a50cad5574911ba6863ed9fa61ee7-0.
INFO 03-02 00:59:13 [logger.py:42] Received request cmpl-da1b86adf3d342508da83ec96b1f673b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:13 [async_llm.py:261] Added request cmpl-da1b86adf3d342508da83ec96b1f673b-0.
INFO 03-02 00:59:14 [logger.py:42] Received request cmpl-68a36166eb004d149fcd2fb969497992-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:14 [async_llm.py:261] Added request cmpl-68a36166eb004d149fcd2fb969497992-0.
INFO 03-02 00:59:15 [logger.py:42] Received request cmpl-6c35b55cdc8d4664a9c3ddef9c2c16ea-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:15 [async_llm.py:261] Added request cmpl-6c35b55cdc8d4664a9c3ddef9c2c16ea-0.
INFO 03-02 00:59:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:59:16 [logger.py:42] Received request cmpl-fb88ad9172cc4c17b5a05213ad4a0bd0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:16 [async_llm.py:261] Added request cmpl-fb88ad9172cc4c17b5a05213ad4a0bd0-0.
INFO 03-02 00:59:17 [logger.py:42] Received request cmpl-018b8a85d6f74392a86299ecbea547bd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:17 [async_llm.py:261] Added request cmpl-018b8a85d6f74392a86299ecbea547bd-0.
INFO 03-02 00:59:18 [logger.py:42] Received request cmpl-e8bec46a92004ac5882fa42a5d7c2a55-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:18 [async_llm.py:261] Added request cmpl-e8bec46a92004ac5882fa42a5d7c2a55-0.
INFO 03-02 00:59:19 [logger.py:42] Received request cmpl-3caef3cd2de04e14aa2ccfad0ededfe6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:19 [async_llm.py:261] Added request cmpl-3caef3cd2de04e14aa2ccfad0ededfe6-0.
INFO 03-02 00:59:20 [logger.py:42] Received request cmpl-660b0d87c3fd47d993f9bf75b424e316-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:20 [async_llm.py:261] Added request cmpl-660b0d87c3fd47d993f9bf75b424e316-0.
INFO 03-02 00:59:21 [logger.py:42] Received request cmpl-e9d4c645a10842b1bb66ee5887bfdce2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:21 [async_llm.py:261] Added request cmpl-e9d4c645a10842b1bb66ee5887bfdce2-0.
INFO 03-02 00:59:22 [logger.py:42] Received request cmpl-0918787e4a42400d8aeb5a45a4ea091b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:22 [async_llm.py:261] Added request cmpl-0918787e4a42400d8aeb5a45a4ea091b-0.
INFO 03-02 00:59:24 [logger.py:42] Received request cmpl-c5bc90a0a67e464db3aff2c49bbfb40a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:24 [async_llm.py:261] Added request cmpl-c5bc90a0a67e464db3aff2c49bbfb40a-0.
INFO 03-02 00:59:25 [logger.py:42] Received request cmpl-487cda6a8dd6403f8d15c963e004fe06-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:25 [async_llm.py:261] Added request cmpl-487cda6a8dd6403f8d15c963e004fe06-0.
INFO 03-02 00:59:26 [logger.py:42] Received request cmpl-16d3dfac2ba84911bc4085b3ddeb138d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:26 [async_llm.py:261] Added request cmpl-16d3dfac2ba84911bc4085b3ddeb138d-0.
INFO 03-02 00:59:26 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:59:27 [logger.py:42] Received request cmpl-57771888d75a4507a207f87bb09c4b93-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:27 [async_llm.py:261] Added request cmpl-57771888d75a4507a207f87bb09c4b93-0.
INFO 03-02 00:59:28 [logger.py:42] Received request cmpl-81eff1afa02e4fb9a43baee288bc94cf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:28 [async_llm.py:261] Added request cmpl-81eff1afa02e4fb9a43baee288bc94cf-0.
INFO 03-02 00:59:29 [logger.py:42] Received request cmpl-5f25114fb4a94a37b8cfc8fc8069bfce-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:29 [async_llm.py:261] Added request cmpl-5f25114fb4a94a37b8cfc8fc8069bfce-0.
INFO 03-02 00:59:30 [logger.py:42] Received request cmpl-a48e44d7114e4cd191be4f2e2ff54003-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:30 [async_llm.py:261] Added request cmpl-a48e44d7114e4cd191be4f2e2ff54003-0.
INFO 03-02 00:59:31 [logger.py:42] Received request cmpl-d23a41f6835c4957b80bfa205050eb01-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:31 [async_llm.py:261] Added request cmpl-d23a41f6835c4957b80bfa205050eb01-0.
INFO 03-02 00:59:32 [logger.py:42] Received request cmpl-63019a16ac6044ee976e3d1464ef16dd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:32 [async_llm.py:261] Added request cmpl-63019a16ac6044ee976e3d1464ef16dd-0.
INFO 03-02 00:59:33 [logger.py:42] Received request cmpl-0ae98953da7b4f2d9fcca8f4102810fc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:33 [async_llm.py:261] Added request cmpl-0ae98953da7b4f2d9fcca8f4102810fc-0.
INFO 03-02 00:59:34 [logger.py:42] Received request cmpl-15b5092549aa4e44b07bb3208955dc46-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:34 [async_llm.py:261] Added request cmpl-15b5092549aa4e44b07bb3208955dc46-0.
INFO 03-02 00:59:35 [logger.py:42] Received request cmpl-e94c5b37374c47c0ba35b7f93a5277c7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:35 [async_llm.py:261] Added request cmpl-e94c5b37374c47c0ba35b7f93a5277c7-0.
INFO 03-02 00:59:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:59:37 [logger.py:42] Received request cmpl-f4ba63e75fa146de9f135f46b7471440-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:37 [async_llm.py:261] Added request cmpl-f4ba63e75fa146de9f135f46b7471440-0.
INFO 03-02 00:59:38 [logger.py:42] Received request cmpl-26ba986727b64caa8845918d70db48e9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:38 [async_llm.py:261] Added request cmpl-26ba986727b64caa8845918d70db48e9-0.
INFO 03-02 00:59:39 [logger.py:42] Received request cmpl-d548c0994e7e4926b6dca00374d32a97-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:39 [async_llm.py:261] Added request cmpl-d548c0994e7e4926b6dca00374d32a97-0.
INFO 03-02 00:59:40 [logger.py:42] Received request cmpl-78672f67d26948ef945b872d86d684ab-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:40 [async_llm.py:261] Added request cmpl-78672f67d26948ef945b872d86d684ab-0.
INFO 03-02 00:59:41 [logger.py:42] Received request cmpl-0b7f5f36e3bf44d694c2d441eeeeee85-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:41 [async_llm.py:261] Added request cmpl-0b7f5f36e3bf44d694c2d441eeeeee85-0.
INFO 03-02 00:59:42 [logger.py:42] Received request cmpl-e9009d57af0347f793b2aceb08b65b22-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:42 [async_llm.py:261] Added request cmpl-e9009d57af0347f793b2aceb08b65b22-0.
INFO 03-02 00:59:43 [logger.py:42] Received request cmpl-b19b7c48126242b4b529f5c10ba40c72-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:43 [async_llm.py:261] Added request cmpl-b19b7c48126242b4b529f5c10ba40c72-0.
INFO 03-02 00:59:44 [logger.py:42] Received request cmpl-4dc7f5f65abd48e3862215ea04459c6f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:44 [async_llm.py:261] Added request cmpl-4dc7f5f65abd48e3862215ea04459c6f-0.
INFO 03-02 00:59:45 [logger.py:42] Received request cmpl-e9f36abd8fb54892a762b4d3a3a19cd4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:45 [async_llm.py:261] Added request cmpl-e9f36abd8fb54892a762b4d3a3a19cd4-0.
INFO 03-02 00:59:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:59:46 [logger.py:42] Received request cmpl-0c8efb77fc6544858c38638074267c90-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:46 [async_llm.py:261] Added request cmpl-0c8efb77fc6544858c38638074267c90-0.
INFO 03-02 00:59:47 [logger.py:42] Received request cmpl-3c260be0a2b443fa94ef4208e905b625-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:47 [async_llm.py:261] Added request cmpl-3c260be0a2b443fa94ef4208e905b625-0.
INFO 03-02 00:59:48 [logger.py:42] Received request cmpl-b0fdf0b10e934672bc3bf45422417046-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:48 [async_llm.py:261] Added request cmpl-b0fdf0b10e934672bc3bf45422417046-0.
INFO 03-02 00:59:50 [logger.py:42] Received request cmpl-6225bca38e7d439ca8b5b79ea4ac965c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:50 [async_llm.py:261] Added request cmpl-6225bca38e7d439ca8b5b79ea4ac965c-0.
INFO 03-02 00:59:51 [logger.py:42] Received request cmpl-a9d684deaf7b454cbafa3656fffe21b9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:51 [async_llm.py:261] Added request cmpl-a9d684deaf7b454cbafa3656fffe21b9-0.
INFO 03-02 00:59:52 [logger.py:42] Received request cmpl-d4658ee32b24461e919e859aadcaad34-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:52 [async_llm.py:261] Added request cmpl-d4658ee32b24461e919e859aadcaad34-0.
INFO 03-02 00:59:53 [logger.py:42] Received request cmpl-64e1e32a52794c85a66b7d2c66b60f72-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:53 [async_llm.py:261] Added request cmpl-64e1e32a52794c85a66b7d2c66b60f72-0.
INFO 03-02 00:59:54 [logger.py:42] Received request cmpl-fb2bd91731e046d5bb710a4f2d285afe-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:54 [async_llm.py:261] Added request cmpl-fb2bd91731e046d5bb710a4f2d285afe-0.
INFO 03-02 00:59:55 [logger.py:42] Received request cmpl-9952094d3a9a4b6ea51fb861f6a9f6fd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:55 [async_llm.py:261] Added request cmpl-9952094d3a9a4b6ea51fb861f6a9f6fd-0.
INFO 03-02 00:59:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 00:59:56 [logger.py:42] Received request cmpl-0e0e1d1427d0400196df3818858176c9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:56 [async_llm.py:261] Added request cmpl-0e0e1d1427d0400196df3818858176c9-0.
INFO 03-02 00:59:57 [logger.py:42] Received request cmpl-7e51457766fa485aa8b9a20eda9d9da9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:57 [async_llm.py:261] Added request cmpl-7e51457766fa485aa8b9a20eda9d9da9-0.
INFO 03-02 00:59:58 [logger.py:42] Received request cmpl-e535b318d31b43f0b27dffd284267567-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:58 [async_llm.py:261] Added request cmpl-e535b318d31b43f0b27dffd284267567-0.
INFO 03-02 00:59:59 [logger.py:42] Received request cmpl-021b002d3954409ba12f1c47e6c49669-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:59 [async_llm.py:261] Added request cmpl-021b002d3954409ba12f1c47e6c49669-0.
INFO 03-02 01:00:00 [logger.py:42] Received request cmpl-2e9f224c6013458b9d2a29f7d203c2d1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:00 [async_llm.py:261] Added request cmpl-2e9f224c6013458b9d2a29f7d203c2d1-0.
INFO 03-02 01:00:01 [logger.py:42] Received request cmpl-248e5a37778f47fc9b998bf763152a8b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:01 [async_llm.py:261] Added request cmpl-248e5a37778f47fc9b998bf763152a8b-0.
INFO 03-02 01:00:03 [logger.py:42] Received request cmpl-2f056d434cc7436184cb268c48c0e003-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:03 [async_llm.py:261] Added request cmpl-2f056d434cc7436184cb268c48c0e003-0.
INFO 03-02 01:00:04 [logger.py:42] Received request cmpl-febbddb55e674922ab9bb4787661a92b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:04 [async_llm.py:261] Added request cmpl-febbddb55e674922ab9bb4787661a92b-0.
INFO 03-02 01:00:05 [logger.py:42] Received request cmpl-c860aecb9d56447496a4d50e3f8b94d0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:05 [async_llm.py:261] Added request cmpl-c860aecb9d56447496a4d50e3f8b94d0-0.
INFO 03-02 01:00:06 [logger.py:42] Received request cmpl-328f7feb78f94b26adeb288b59bd1a01-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:06 [async_llm.py:261] Added request cmpl-328f7feb78f94b26adeb288b59bd1a01-0.
INFO 03-02 01:00:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:00:07 [logger.py:42] Received request cmpl-d5a832b091e54b0b804c0afc8200ec0e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:07 [async_llm.py:261] Added request cmpl-d5a832b091e54b0b804c0afc8200ec0e-0.
INFO 03-02 01:00:08 [logger.py:42] Received request cmpl-7ae9948cd3a34b92a462908071d853dc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:08 [async_llm.py:261] Added request cmpl-7ae9948cd3a34b92a462908071d853dc-0.
INFO 03-02 01:00:09 [logger.py:42] Received request cmpl-345b8744b35a4fd4ace342b65736ed39-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:09 [async_llm.py:261] Added request cmpl-345b8744b35a4fd4ace342b65736ed39-0.
INFO 03-02 01:00:10 [logger.py:42] Received request cmpl-e00f661901a3437a8a968cbd6d2255aa-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:10 [async_llm.py:261] Added request cmpl-e00f661901a3437a8a968cbd6d2255aa-0.
INFO 03-02 01:00:11 [logger.py:42] Received request cmpl-dd6e2bada5f54532bf8eb97c42272928-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:11 [async_llm.py:261] Added request cmpl-dd6e2bada5f54532bf8eb97c42272928-0.
INFO 03-02 01:00:12 [logger.py:42] Received request cmpl-83f24ec467254c76b4db28e5fa10234a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:12 [async_llm.py:261] Added request cmpl-83f24ec467254c76b4db28e5fa10234a-0.
INFO 03-02 01:00:13 [logger.py:42] Received request cmpl-c393025ff7334f4fbfe11135746ce736-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:13 [async_llm.py:261] Added request cmpl-c393025ff7334f4fbfe11135746ce736-0.
INFO 03-02 01:00:14 [logger.py:42] Received request cmpl-c2b20df8698443b6822ac92703fa4979-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:14 [async_llm.py:261] Added request cmpl-c2b20df8698443b6822ac92703fa4979-0.
INFO 03-02 01:00:16 [logger.py:42] Received request cmpl-a8f5d9dbaa10431186d73526c34fd92f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:16 [async_llm.py:261] Added request cmpl-a8f5d9dbaa10431186d73526c34fd92f-0.
INFO 03-02 01:00:16 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:00:17 [logger.py:42] Received request cmpl-c9573a92c47d4feb92c67034594564c0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:17 [async_llm.py:261] Added request cmpl-c9573a92c47d4feb92c67034594564c0-0.
INFO 03-02 01:00:18 [logger.py:42] Received request cmpl-dd309fc9854a47449ad3c19c3d0a92bb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:18 [async_llm.py:261] Added request cmpl-dd309fc9854a47449ad3c19c3d0a92bb-0.
INFO 03-02 01:00:19 [logger.py:42] Received request cmpl-e065fa3346734eb59883b697824d2d78-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:19 [async_llm.py:261] Added request cmpl-e065fa3346734eb59883b697824d2d78-0.
INFO 03-02 01:00:20 [logger.py:42] Received request cmpl-bee2a42c7a5f4cd9add76de6c442bddd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:20 [async_llm.py:261] Added request cmpl-bee2a42c7a5f4cd9add76de6c442bddd-0.
INFO 03-02 01:00:21 [logger.py:42] Received request cmpl-6d2c0a657d664095a83ef055af27b0fe-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:21 [async_llm.py:261] Added request cmpl-6d2c0a657d664095a83ef055af27b0fe-0.
INFO 03-02 01:00:22 [logger.py:42] Received request cmpl-8192320ccab64403bb300e43231760d8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:22 [async_llm.py:261] Added request cmpl-8192320ccab64403bb300e43231760d8-0.
INFO 03-02 01:00:23 [logger.py:42] Received request cmpl-1332403e35824d02a59be9bb9c00f0ad-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:23 [async_llm.py:261] Added request cmpl-1332403e35824d02a59be9bb9c00f0ad-0.
INFO 03-02 01:00:24 [logger.py:42] Received request cmpl-7bc874f6678d4685aacde0be5e4e735e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:24 [async_llm.py:261] Added request cmpl-7bc874f6678d4685aacde0be5e4e735e-0.
INFO 03-02 01:00:25 [logger.py:42] Received request cmpl-52ac95f8c4d444c987d823eba16a36e7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:25 [async_llm.py:261] Added request cmpl-52ac95f8c4d444c987d823eba16a36e7-0.
INFO 03-02 01:00:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
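The periodic `loggers.py:116` lines above follow a fixed format, so the throughput and queue metrics can be pulled out mechanically. A minimal sketch of a parser for exactly that line shape (the regex is keyed to the format shown in this log; other vLLM versions may word the stats line differently):

```python
import re

# Field pattern for the "Engine 000: Avg prompt throughput: ..." stats
# lines emitted by vLLM's periodic metrics logger, as seen in this log.
stats_re = re.compile(
    r"Avg prompt throughput: (?P<prompt>[\d.]+) tokens/s, "
    r"Avg generation throughput: (?P<gen>[\d.]+) tokens/s, "
    r"Running: (?P<running>\d+) reqs, Waiting: (?P<waiting>\d+) reqs, "
    r"GPU KV cache usage: (?P<kv>[\d.]+)%"
)

# One stats line copied verbatim from the log above.
line = ("INFO 03-02 01:00:26 [loggers.py:116] Engine 000: "
        "Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: "
        "4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, "
        "GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%")

m = stats_re.search(line)
# Extracted metrics as strings; cast to float/int as needed.
print(m.group("prompt"), m.group("gen"), m.group("waiting"))
```

Running this over the full log would, for instance, confirm the steady-state picture here: low single-digit token throughput, an empty queue, and under 1% KV-cache usage throughout the run.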
INFO 03-02 01:00:26 [logger.py:42] Received request cmpl-72f73ffe2f1f4946a7d2d278ee052f42-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:26 [async_llm.py:261] Added request cmpl-72f73ffe2f1f4946a7d2d278ee052f42-0.
INFO 03-02 01:00:27 [logger.py:42] Received request cmpl-3485028b3b3b46539e9d6f238b3fb2d2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:28 [async_llm.py:261] Added request cmpl-3485028b3b3b46539e9d6f238b3fb2d2-0.
INFO 03-02 01:00:29 [logger.py:42] Received request cmpl-4af24d380b524f698292f55c5657b11d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:29 [async_llm.py:261] Added request cmpl-4af24d380b524f698292f55c5657b11d-0.
INFO 03-02 01:00:30 [logger.py:42] Received request cmpl-7faaff31158b484d818a7a7b0a7d25b4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:30 [async_llm.py:261] Added request cmpl-7faaff31158b484d818a7a7b0a7d25b4-0.
INFO 03-02 01:00:31 [logger.py:42] Received request cmpl-e28d6719788643b4a7e3a470509156c6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:31 [async_llm.py:261] Added request cmpl-e28d6719788643b4a7e3a470509156c6-0.
INFO 03-02 01:00:32 [logger.py:42] Received request cmpl-82bb74762dc641eea2c1472349aec653-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:32 [async_llm.py:261] Added request cmpl-82bb74762dc641eea2c1472349aec653-0.
INFO 03-02 01:00:33 [logger.py:42] Received request cmpl-2b7e63d9bdf04612a013708802938d86-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:33 [async_llm.py:261] Added request cmpl-2b7e63d9bdf04612a013708802938d86-0.
INFO 03-02 01:00:34 [logger.py:42] Received request cmpl-8cf54b87157843f29ebbca5918f0320d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:34 [async_llm.py:261] Added request cmpl-8cf54b87157843f29ebbca5918f0320d-0.
INFO 03-02 01:00:35 [logger.py:42] Received request cmpl-b937704926094679a8ff370703135a0c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:35 [async_llm.py:261] Added request cmpl-b937704926094679a8ff370703135a0c-0.
INFO 03-02 01:00:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:00:36 [logger.py:42] Received request cmpl-7d3d2652b402431c9e92b589369468f6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:36 [async_llm.py:261] Added request cmpl-7d3d2652b402431c9e92b589369468f6-0.
INFO 03-02 01:00:37 [logger.py:42] Received request cmpl-3e3f4b37f10448179123359db6a146e1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:37 [async_llm.py:261] Added request cmpl-3e3f4b37f10448179123359db6a146e1-0.
INFO 03-02 01:00:38 [logger.py:42] Received request cmpl-f00eb5fd2a9f446a89c12fb2f2731273-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:38 [async_llm.py:261] Added request cmpl-f00eb5fd2a9f446a89c12fb2f2731273-0.
INFO 03-02 01:00:39 [logger.py:42] Received request cmpl-3d1c28e441b84914af1a534ba17ee19b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:39 [async_llm.py:261] Added request cmpl-3d1c28e441b84914af1a534ba17ee19b-0.
INFO 03-02 01:00:41 [logger.py:42] Received request cmpl-079337311de34d0f9f7f89204f2dab82-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:41 [async_llm.py:261] Added request cmpl-079337311de34d0f9f7f89204f2dab82-0.
INFO 03-02 01:00:42 [logger.py:42] Received request cmpl-b46981dd9f8e487499c1c79d1635627f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:42 [async_llm.py:261] Added request cmpl-b46981dd9f8e487499c1c79d1635627f-0.
INFO 03-02 01:00:43 [logger.py:42] Received request cmpl-864331d56c1846538f04bfd585c03fa9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:43 [async_llm.py:261] Added request cmpl-864331d56c1846538f04bfd585c03fa9-0.
INFO 03-02 01:00:44 [logger.py:42] Received request cmpl-050d87d9f8bc4c4fa73eae62b17fb845-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:44 [async_llm.py:261] Added request cmpl-050d87d9f8bc4c4fa73eae62b17fb845-0.
INFO 03-02 01:00:45 [logger.py:42] Received request cmpl-3add31647e5f4204ae2f931ab626e3d2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:45 [async_llm.py:261] Added request cmpl-3add31647e5f4204ae2f931ab626e3d2-0.
INFO 03-02 01:00:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:00:46 [logger.py:42] Received request cmpl-0e8da70dac914d0389f78b59a6d91d11-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:46 [async_llm.py:261] Added request cmpl-0e8da70dac914d0389f78b59a6d91d11-0.
INFO 03-02 01:00:47 [logger.py:42] Received request cmpl-155da4e9662340d0ac9b26b2813fe572-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:47 [async_llm.py:261] Added request cmpl-155da4e9662340d0ac9b26b2813fe572-0.
INFO 03-02 01:00:48 [logger.py:42] Received request cmpl-f490939df69e438484d26a2e9f536e50-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:48 [async_llm.py:261] Added request cmpl-f490939df69e438484d26a2e9f536e50-0.
INFO 03-02 01:00:49 [logger.py:42] Received request cmpl-3b5cf5502eb64ac0a4acdd3b3f64f2d3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:49 [async_llm.py:261] Added request cmpl-3b5cf5502eb64ac0a4acdd3b3f64f2d3-0.
INFO 03-02 01:00:50 [logger.py:42] Received request cmpl-9d38c65763f8491fb03916adb3354a76-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:50 [async_llm.py:261] Added request cmpl-9d38c65763f8491fb03916adb3354a76-0.
INFO 03-02 01:00:51 [logger.py:42] Received request cmpl-61f399f88daf4eb0922d4287edb9298e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:51 [async_llm.py:261] Added request cmpl-61f399f88daf4eb0922d4287edb9298e-0.
INFO 03-02 01:00:52 [logger.py:42] Received request cmpl-9f93c167b73142d9a6409c15d557f83e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:52 [async_llm.py:261] Added request cmpl-9f93c167b73142d9a6409c15d557f83e-0.
INFO 03-02 01:00:54 [logger.py:42] Received request cmpl-11cd005d88c84dc4b85ffde16dcf01c5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:54 [async_llm.py:261] Added request cmpl-11cd005d88c84dc4b85ffde16dcf01c5-0.
INFO 03-02 01:00:55 [logger.py:42] Received request cmpl-8a39cbf8653c417abd4abe9965893786-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:55 [async_llm.py:261] Added request cmpl-8a39cbf8653c417abd4abe9965893786-0.
INFO 03-02 01:00:56 [logger.py:42] Received request cmpl-e259dd1aadb540679033da218d43b482-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:56 [async_llm.py:261] Added request cmpl-e259dd1aadb540679033da218d43b482-0.
INFO 03-02 01:00:56 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:00:57 [logger.py:42] Received request cmpl-fdc91fedf58847b780b0dfcdbe97bf34-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:57 [async_llm.py:261] Added request cmpl-fdc91fedf58847b780b0dfcdbe97bf34-0.
INFO 03-02 01:00:58 [logger.py:42] Received request cmpl-5fbdd7e490994871b8994d1709ec1d6b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:58 [async_llm.py:261] Added request cmpl-5fbdd7e490994871b8994d1709ec1d6b-0.
INFO 03-02 01:00:59 [logger.py:42] Received request cmpl-5eedbc5262994516a1dca7533ee5b321-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:59 [async_llm.py:261] Added request cmpl-5eedbc5262994516a1dca7533ee5b321-0.
INFO 03-02 01:01:00 [logger.py:42] Received request cmpl-6add5287299f42edb0e601e3a83d421a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:01:00 [async_llm.py:261] Added request cmpl-6add5287299f42edb0e601e3a83d421a-0.
INFO 03-02 01:01:01 [logger.py:42] Received request cmpl-3c9e9a1ead5e46d79a5e5d5ee00f4ad5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:01:01 [async_llm.py:261] Added request cmpl-3c9e9a1ead5e46d79a5e5d5ee00f4ad5-0.
INFO 03-02 01:01:02 [logger.py:42] Received request cmpl-799746e30c1449f389dda656f0faaf5d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:01:02 [async_llm.py:261] Added request cmpl-799746e30c1449f389dda656f0faaf5d-0.
INFO 03-02 01:01:03 [logger.py:42] Received request cmpl-b34b779e2bde473a84ca46e11d296a1c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:01:03 [async_llm.py:261] Added request cmpl-b34b779e2bde473a84ca46e11d296a1c-0.
INFO 03-02 01:01:04 [logger.py:42] Received request cmpl-8fef899851bf4e39a7fcac606fe2984a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:01:04 [async_llm.py:261] Added request cmpl-8fef899851bf4e39a7fcac606fe2984a-0.
INFO 03-02 01:01:05 [logger.py:42] Received request cmpl-d12f4fe805134e9ba6433c1fe7a172bd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:01:05 [async_llm.py:261] Added request cmpl-d12f4fe805134e9ba6433c1fe7a172bd-0.
INFO 03-02 01:01:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:01:07 [logger.py:42] Received request cmpl-1450251598274c16bdae76425a972d62-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:01:07 [async_llm.py:261] Added request cmpl-1450251598274c16bdae76425a972d62-0.
INFO 03-02 01:01:08 [logger.py:42] Received request cmpl-51f7e6ce83da448fa967285110f69e78-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:01:08 [async_llm.py:261] Added request cmpl-51f7e6ce83da448fa967285110f69e78-0.
INFO 03-02 01:01:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
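The engine's reported averages are consistent with the request pattern in the log: each request carries 7 prompt tokens (the length of the logged `prompt_token_ids`) and caps generation at 5 tokens (`max_tokens=5`), arriving roughly once per second. A quick cross-check, assuming that approximate 1 req/s rate:

```python
# Cross-check the reported throughput against the logged request pattern.
prompt_tokens_per_request = 7   # len of the logged prompt_token_ids list
gen_tokens_per_request = 5      # max_tokens=5 in the logged SamplingParams
requests_per_second = 1.0       # ~one request per second in the log (approx.)

avg_prompt_throughput = prompt_tokens_per_request * requests_per_second
avg_gen_throughput = gen_tokens_per_request * requests_per_second
print(avg_prompt_throughput, avg_gen_throughput)
```

This lands near 7 prompt tokens/s and 5 generation tokens/s; the engine reports 6.3–7.0 and 4.5–4.9 respectively, the gap reflecting inter-request idle time and early stops.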
INFO 03-02 01:01:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:01:36 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.6 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 1.3%, Prefix cache hit rate: 0.0%
INFO 03-02 01:01:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.9 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:01:52 [async_llm.py:261] Added request cmpl-ec1476559d27468aa67f0489652e0125-0.
INFO 03-02 01:01:53 [logger.py:42] Received request cmpl-c976f81e45b74cbdaf3b5911c7e96a46-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:01:53 [async_llm.py:261] Added request cmpl-c976f81e45b74cbdaf3b5911c7e96a46-0.
INFO 03-02 01:01:54 [logger.py:42] Received request cmpl-32757d52c1504a4992c7250ef115c44d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:01:54 [async_llm.py:261] Added request cmpl-32757d52c1504a4992c7250ef115c44d-0.
INFO 03-02 01:01:55 [logger.py:42] Received request cmpl-1c65c8ec2edf4fb9bfcc38a6e02e2629-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:01:55 [async_llm.py:261] Added request cmpl-1c65c8ec2edf4fb9bfcc38a6e02e2629-0.
INFO 03-02 01:01:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:01:56 [logger.py:42] Received request cmpl-5e8bfb3924a14e03aaa23b7990ec3346-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:01:56 [async_llm.py:261] Added request cmpl-5e8bfb3924a14e03aaa23b7990ec3346-0.
INFO 03-02 01:01:57 [logger.py:42] Received request cmpl-978d2b46e29f482fa0a8c91ef5135d2a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:01:57 [async_llm.py:261] Added request cmpl-978d2b46e29f482fa0a8c91ef5135d2a-0.
INFO 03-02 01:01:59 [logger.py:42] Received request cmpl-730119871e3543829ed28233b28e10d1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:01:59 [async_llm.py:261] Added request cmpl-730119871e3543829ed28233b28e10d1-0.
INFO 03-02 01:02:00 [logger.py:42] Received request cmpl-5eb1df5494d645c0bb1b06ba4344dcf8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:02:00 [async_llm.py:261] Added request cmpl-5eb1df5494d645c0bb1b06ba4344dcf8-0.
INFO 03-02 01:02:01 [logger.py:42] Received request cmpl-68d7006d5ef141428ef6f879963b34b1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:02:01 [async_llm.py:261] Added request cmpl-68d7006d5ef141428ef6f879963b34b1-0.
INFO 03-02 01:02:02 [logger.py:42] Received request cmpl-8f6c8f9bfe51433492ca4c9ed27430f6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:02:02 [async_llm.py:261] Added request cmpl-8f6c8f9bfe51433492ca4c9ed27430f6-0.
INFO 03-02 01:02:03 [logger.py:42] Received request cmpl-0a22fba6f8544070b126e500c189c842-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:02:03 [async_llm.py:261] Added request cmpl-0a22fba6f8544070b126e500c189c842-0.
INFO 03-02 01:02:04 [logger.py:42] Received request cmpl-63e766141d3640b898e2f5293efd7dc2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:02:04 [async_llm.py:261] Added request cmpl-63e766141d3640b898e2f5293efd7dc2-0.
INFO 03-02 01:02:05 [logger.py:42] Received request cmpl-6fda31dd06f240c0b0fc8b0d3c075942-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:02:05 [async_llm.py:261] Added request cmpl-6fda31dd06f240c0b0fc8b0d3c075942-0.
INFO 03-02 01:02:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:02:06 [logger.py:42] Received request cmpl-c93242c2ef7c4da0824a95b6f8d60ae2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:02:06 [async_llm.py:261] Added request cmpl-c93242c2ef7c4da0824a95b6f8d60ae2-0.
INFO 03-02 01:02:07 [logger.py:42] Received request cmpl-498999c487f44e5c8769318b51cbaccc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:02:07 [async_llm.py:261] Added request cmpl-498999c487f44e5c8769318b51cbaccc-0.
INFO 03-02 01:02:08 [logger.py:42] Received request cmpl-464c9522ff2e48c9891c84ef2c8922aa-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:02:08 [async_llm.py:261] Added request cmpl-464c9522ff2e48c9891c84ef2c8922aa-0.
INFO 03-02 01:02:09 [logger.py:42] Received request cmpl-65d33bc924774bbe8468883f3b15552b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:02:09 [async_llm.py:261] Added request cmpl-65d33bc924774bbe8468883f3b15552b-0.
INFO 03-02 01:02:10 [logger.py:42] Received request cmpl-9469879d5beb45c78499387011c0beed-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:02:10 [async_llm.py:261] Added request cmpl-9469879d5beb45c78499387011c0beed-0.
INFO 03-02 01:02:12 [logger.py:42] Received request cmpl-71dade166b9b42598626880490ae214c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:02:12 [async_llm.py:261] Added request cmpl-71dade166b9b42598626880490ae214c-0.
INFO 03-02 01:02:13 [logger.py:42] Received request cmpl-25540726d6024d36b1aafafaf60d678f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:02:13 [async_llm.py:261] Added request cmpl-25540726d6024d36b1aafafaf60d678f-0.
INFO 03-02 01:02:14 [logger.py:42] Received request cmpl-8094da4decb44453826687d57515d377-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:02:14 [async_llm.py:261] Added request cmpl-8094da4decb44453826687d57515d377-0.
INFO 03-02 01:02:15 [logger.py:42] Received request cmpl-19c1d0a3cda64a9292e6267668627917-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:02:15 [async_llm.py:261] Added request cmpl-19c1d0a3cda64a9292e6267668627917-0.
INFO 03-02 01:02:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:02:16 [logger.py:42] Received request cmpl-1a49a8a8915e4e7591bcae84de64fed8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:02:16 [async_llm.py:261] Added request cmpl-1a49a8a8915e4e7591bcae84de64fed8-0.
INFO 03-02 01:02:17 [logger.py:42] Received request cmpl-6fed53c15a64451487174e3f9be0ebd1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:02:17 [async_llm.py:261] Added request cmpl-6fed53c15a64451487174e3f9be0ebd1-0.
INFO 03-02 01:02:18 [logger.py:42] Received request cmpl-d48fa68239cd49ac9f7b9f989ae99538-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:02:18 [async_llm.py:261] Added request cmpl-d48fa68239cd49ac9f7b9f989ae99538-0.
INFO 03-02 01:02:19 [logger.py:42] Received request cmpl-28e58c2b4ee3417e8814f51ed0c84718-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:02:19 [async_llm.py:261] Added request cmpl-28e58c2b4ee3417e8814f51ed0c84718-0.
INFO 03-02 01:02:20 [logger.py:42] Received request cmpl-5ae7d57d7b844e93ad3f64e1be10f4a6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:02:20 [async_llm.py:261] Added request cmpl-5ae7d57d7b844e93ad3f64e1be10f4a6-0.
INFO 03-02 01:02:21 [logger.py:42] Received request cmpl-bd553e593eaf4e429033f5225070ffc0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:02:21 [async_llm.py:261] Added request cmpl-bd553e593eaf4e429033f5225070ffc0-0.
INFO:  1.2.3.4:123 - "POST /v1/completions HTTP/1.1" 404 Not Found
INFO 03-02 01:02:22 [logger.py:42] Received request cmpl-95baa09a3da540d6938f76163ef6ebec-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:02:22 [async_llm.py:261] Added request cmpl-95baa09a3da540d6938f76163ef6ebec-0.
INFO 03-02 01:02:23 [logger.py:42] Received request cmpl-a6356b642ded4b55bd6e39a1963adb76-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:02:23 [async_llm.py:261] Added request cmpl-a6356b642ded4b55bd6e39a1963adb76-0.
INFO 03-02 01:02:25 [logger.py:42] Received request cmpl-8b8c0b774e164c79928ceea97708ee4a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:02:25 [async_llm.py:261] Added request cmpl-8b8c0b774e164c79928ceea97708ee4a-0.
INFO 03-02 01:02:26 [logger.py:42] Received request cmpl-7db13e4b7971469abf56de01c411259e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:02:26 [async_llm.py:261] Added request cmpl-7db13e4b7971469abf56de01c411259e-0.
INFO 03-02 01:02:26 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[... request/response log triplets for the same prompt and parameters (differing only in request ID and timestamp) repeat roughly once per second through 01:03:09; only the periodic engine throughput reports are retained below ...]
INFO 03-02 01:02:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:02:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:02:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:03:06 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.8 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 1.3%, Prefix cache hit rate: 0.0%
INFO 03-02 01:03:10 [logger.py:42] Received request cmpl-6f9b0e4bb62f461bb53c18251a5e4a7f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:03:10 [async_llm.py:261] Added request cmpl-6f9b0e4bb62f461bb53c18251a5e4a7f-0.
INFO 03-02 01:03:11 [logger.py:42] Received request cmpl-7746c6471100467e8e8215d89ba92bbf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:03:11 [async_llm.py:261] Added request cmpl-7746c6471100467e8e8215d89ba92bbf-0.
INFO 03-02 01:03:12 [logger.py:42] Received request cmpl-b98ac44d280a4c90b15da38b900fef02-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:03:12 [async_llm.py:261] Added request cmpl-b98ac44d280a4c90b15da38b900fef02-0.
INFO 03-02 01:03:13 [logger.py:42] Received request cmpl-9a40aa6849f9471fa127a8c626f4ead0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:03:13 [async_llm.py:261] Added request cmpl-9a40aa6849f9471fa127a8c626f4ead0-0.
INFO 03-02 01:03:14 [logger.py:42] Received request cmpl-8fcca35864c64c6cb0267b2337a04ac4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:03:14 [async_llm.py:261] Added request cmpl-8fcca35864c64c6cb0267b2337a04ac4-0.
INFO 03-02 01:03:16 [logger.py:42] Received request cmpl-13cbfa2004b444308e4c1ec48f50399e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:03:16 [async_llm.py:261] Added request cmpl-13cbfa2004b444308e4c1ec48f50399e-0.
INFO 03-02 01:03:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.7 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:03:17 [logger.py:42] Received request cmpl-c3783d65f3874c9a989e4ce7fdb329eb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:03:17 [async_llm.py:261] Added request cmpl-c3783d65f3874c9a989e4ce7fdb329eb-0.
INFO:  1.2.3.4:123 - "POST /v1/completions HTTP/1.1" 404 Not Found
INFO 03-02 01:03:18 [logger.py:42] Received request cmpl-41ac149d82fc44b987eb304cab52b5f1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:03:18 [async_llm.py:261] Added request cmpl-41ac149d82fc44b987eb304cab52b5f1-0.
INFO 03-02 01:03:19 [logger.py:42] Received request cmpl-c0178ee8fb0d4029a4b8c0021e223199-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:03:19 [async_llm.py:261] Added request cmpl-c0178ee8fb0d4029a4b8c0021e223199-0.
INFO 03-02 01:03:20 [logger.py:42] Received request cmpl-58267657755f4329a1f0ceef4c4140b7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:03:20 [async_llm.py:261] Added request cmpl-58267657755f4329a1f0ceef4c4140b7-0.
INFO 03-02 01:03:21 [logger.py:42] Received request cmpl-352ecdbc477145d0a8481b590e128712-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:03:21 [async_llm.py:261] Added request cmpl-352ecdbc477145d0a8481b590e128712-0.
INFO 03-02 01:03:22 [logger.py:42] Received request cmpl-b6cec22f47524201b90379096d7eef2a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:03:22 [async_llm.py:261] Added request cmpl-b6cec22f47524201b90379096d7eef2a-0.
INFO 03-02 01:03:23 [logger.py:42] Received request cmpl-ad5969e814424dd49d6b5f7b5d143327-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:03:23 [async_llm.py:261] Added request cmpl-ad5969e814424dd49d6b5f7b5d143327-0.
INFO 03-02 01:03:24 [logger.py:42] Received request cmpl-d7a9d701756c4b19904c4b810c78df8c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:03:24 [async_llm.py:261] Added request cmpl-d7a9d701756c4b19904c4b810c78df8c-0.
INFO 03-02 01:03:25 [logger.py:42] Received request cmpl-2a80d745ad334657b6b3e54289c839b3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:03:25 [async_llm.py:261] Added request cmpl-2a80d745ad334657b6b3e54289c839b3-0.
INFO 03-02 01:03:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:03:26 [logger.py:42] Received request cmpl-a11cc465d4fa427fb8948dae6dedf396-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:03:26 [async_llm.py:261] Added request cmpl-a11cc465d4fa427fb8948dae6dedf396-0.
INFO 03-02 01:03:27 [logger.py:42] Received request cmpl-d6680e43336d4873bc83cfcc82f086c2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:03:27 [async_llm.py:261] Added request cmpl-d6680e43336d4873bc83cfcc82f086c2-0.
INFO 03-02 01:03:29 [logger.py:42] Received request cmpl-eb4465cfd4064a519d33d1f3a070d688-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:03:29 [async_llm.py:261] Added request cmpl-eb4465cfd4064a519d33d1f3a070d688-0.
INFO 03-02 01:03:30 [logger.py:42] Received request cmpl-64951ab624f9492b81c0b1a915311d2a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:03:30 [async_llm.py:261] Added request cmpl-64951ab624f9492b81c0b1a915311d2a-0.
INFO 03-02 01:03:31 [logger.py:42] Received request cmpl-3cb52967b3764905a7b86bf91b11192d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:03:31 [async_llm.py:261] Added request cmpl-3cb52967b3764905a7b86bf91b11192d-0.
INFO 03-02 01:03:32 [logger.py:42] Received request cmpl-a3b26ad507f84d5b8300045a1ce3f1d4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:03:32 [async_llm.py:261] Added request cmpl-a3b26ad507f84d5b8300045a1ce3f1d4-0.
INFO 03-02 01:03:33 [logger.py:42] Received request cmpl-f8392ef3eeed404f99e918e4bd3f3ecb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:03:33 [async_llm.py:261] Added request cmpl-f8392ef3eeed404f99e918e4bd3f3ecb-0.
INFO 03-02 01:03:34 [logger.py:42] Received request cmpl-d6a034c3f9ee43bf89d6a5b9cc713e68-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:03:34 [async_llm.py:261] Added request cmpl-d6a034c3f9ee43bf89d6a5b9cc713e68-0.
INFO 03-02 01:03:35 [logger.py:42] Received request cmpl-785e03f9b64746529a215a9dc8e5c252-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:03:35 [async_llm.py:261] Added request cmpl-785e03f9b64746529a215a9dc8e5c252-0.
INFO 03-02 01:03:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:03:36 [logger.py:42] Received request cmpl-3fc192044a8040449111d5cf8c7e885f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:03:36 [async_llm.py:261] Added request cmpl-3fc192044a8040449111d5cf8c7e885f-0.
INFO 03-02 01:03:37 [logger.py:42] Received request cmpl-4f07357d22eb4d508df92aadfd6622c6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:03:37 [async_llm.py:261] Added request cmpl-4f07357d22eb4d508df92aadfd6622c6-0.
INFO 03-02 01:03:38 [logger.py:42] Received request cmpl-8a1670c174f4433fb705d1305facde18-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:03:38 [async_llm.py:261] Added request cmpl-8a1670c174f4433fb705d1305facde18-0.
INFO 03-02 01:03:39 [logger.py:42] Received request cmpl-974c0b77d8494214a2286be1ef685c7d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:03:39 [async_llm.py:261] Added request cmpl-974c0b77d8494214a2286be1ef685c7d-0.
INFO 03-02 01:03:40 [logger.py:42] Received request cmpl-606357ff71364b0aae45849613345105-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:03:40 [async_llm.py:261] Added request cmpl-606357ff71364b0aae45849613345105-0.
INFO 03-02 01:03:42 [logger.py:42] Received request cmpl-00516a2eab97400f8181796bebab2b9d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:03:42 [async_llm.py:261] Added request cmpl-00516a2eab97400f8181796bebab2b9d-0.
INFO 03-02 01:03:43 [logger.py:42] Received request cmpl-55e9b072a3554050b66b4d0601fffdb2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:03:43 [async_llm.py:261] Added request cmpl-55e9b072a3554050b66b4d0601fffdb2-0.
INFO 03-02 01:03:44 [logger.py:42] Received request cmpl-87a2715ae4444d1e9dd4551562e05ca7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:03:44 [async_llm.py:261] Added request cmpl-87a2715ae4444d1e9dd4551562e05ca7-0.
INFO 03-02 01:03:45 [logger.py:42] Received request cmpl-abbc8a29a5d64a66bced68eb22e44643-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:03:45 [async_llm.py:261] Added request cmpl-abbc8a29a5d64a66bced68eb22e44643-0.
INFO 03-02 01:03:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:03:46 [logger.py:42] Received request cmpl-0c406aed2ac54b369e06a4ab6a4db15e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:03:46 [async_llm.py:261] Added request cmpl-0c406aed2ac54b369e06a4ab6a4db15e-0.
INFO 03-02 01:03:47 [logger.py:42] Received request cmpl-83009d2179ed4fa48bcb56d4240b9f31-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:03:47 [async_llm.py:261] Added request cmpl-83009d2179ed4fa48bcb56d4240b9f31-0.
INFO 03-02 01:03:48 [logger.py:42] Received request cmpl-35591c7e6da04e048e9eccde3fff789a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:03:48 [async_llm.py:261] Added request cmpl-35591c7e6da04e048e9eccde3fff789a-0.
INFO 03-02 01:03:49 [logger.py:42] Received request cmpl-5936470f29ab4a8ca8fdedb9bc45f64f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:03:49 [async_llm.py:261] Added request cmpl-5936470f29ab4a8ca8fdedb9bc45f64f-0.
INFO 03-02 01:03:50 [logger.py:42] Received request cmpl-3fe4d7ebf9814b36af7fc84f7dc6f3a6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:03:50 [async_llm.py:261] Added request cmpl-3fe4d7ebf9814b36af7fc84f7dc6f3a6-0.
INFO 03-02 01:03:51 [logger.py:42] Received request cmpl-2a3a1169746d4fa982f1e26aa640d739-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:03:51 [async_llm.py:261] Added request cmpl-2a3a1169746d4fa982f1e26aa640d739-0.
INFO 03-02 01:03:52 [logger.py:42] Received request cmpl-bd361e47bb3a42c09d20cd8c6f96f406-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:03:52 [async_llm.py:261] Added request cmpl-bd361e47bb3a42c09d20cd8c6f96f406-0.
INFO 03-02 01:03:53 [logger.py:42] Received request cmpl-51fadbb71adf40bbbab4505af81f6eb2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:03:53 [async_llm.py:261] Added request cmpl-51fadbb71adf40bbbab4505af81f6eb2-0.
INFO 03-02 01:03:55 [logger.py:42] Received request cmpl-4c2766cca6d744ef95091819d5a91c6e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:03:55 [async_llm.py:261] Added request cmpl-4c2766cca6d744ef95091819d5a91c6e-0.
INFO 03-02 01:03:56 [logger.py:42] Received request cmpl-cafc75e369ea4756ace16108363108bd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:03:56 [async_llm.py:261] Added request cmpl-cafc75e369ea4756ace16108363108bd-0.
INFO 03-02 01:03:56 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:03:57 [logger.py:42] Received request cmpl-57caf0857fe04762a19ca46e2c716126-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:03:57 [async_llm.py:261] Added request cmpl-57caf0857fe04762a19ca46e2c716126-0.
INFO 03-02 01:03:58 [logger.py:42] Received request cmpl-4618d2b171e149f9af52901c8086a9fd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:03:58 [async_llm.py:261] Added request cmpl-4618d2b171e149f9af52901c8086a9fd-0.
INFO 03-02 01:03:59 [logger.py:42] Received request cmpl-85831be4720b4841816929786ff9be41-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:03:59 [async_llm.py:261] Added request cmpl-85831be4720b4841816929786ff9be41-0.
INFO 03-02 01:04:00 [logger.py:42] Received request cmpl-3bc39869183c4871ace34f1a0e329e45-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:00 [async_llm.py:261] Added request cmpl-3bc39869183c4871ace34f1a0e329e45-0.
INFO 03-02 01:04:01 [logger.py:42] Received request cmpl-ab4121d6a21c4d708f3d84522566b0a0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:01 [async_llm.py:261] Added request cmpl-ab4121d6a21c4d708f3d84522566b0a0-0.
INFO:  1.2.3.4:123 - "POST /v1/completions HTTP/1.1" 404 Not Found
INFO 03-02 01:04:02 [logger.py:42] Received request cmpl-bca06f779eaf487b921950bc810966af-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:02 [async_llm.py:261] Added request cmpl-bca06f779eaf487b921950bc810966af-0.
INFO 03-02 01:04:03 [logger.py:42] Received request cmpl-7f59488670f54971950b49fd21e0dff3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:03 [async_llm.py:261] Added request cmpl-7f59488670f54971950b49fd21e0dff3-0.
INFO 03-02 01:04:04 [logger.py:42] Received request cmpl-fc0d8fff043b4227b9ab574d7f2a7d00-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:04 [async_llm.py:261] Added request cmpl-fc0d8fff043b4227b9ab574d7f2a7d00-0.
INFO 03-02 01:04:05 [logger.py:42] Received request cmpl-149919bdb47f41d7b662afb81846bcb0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:05 [async_llm.py:261] Added request cmpl-149919bdb47f41d7b662afb81846bcb0-0.
INFO 03-02 01:04:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:04:06 [logger.py:42] Received request cmpl-e4e29fbf54d842c599faf4736ac592a5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:06 [async_llm.py:261] Added request cmpl-e4e29fbf54d842c599faf4736ac592a5-0.
INFO 03-02 01:04:08 [logger.py:42] Received request cmpl-f11f0cb7c24d4ee596bf6d88c5155e76-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:08 [async_llm.py:261] Added request cmpl-f11f0cb7c24d4ee596bf6d88c5155e76-0.
INFO 03-02 01:04:09 [logger.py:42] Received request cmpl-16a9435e441a42d99bd137a42277b29a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:09 [async_llm.py:261] Added request cmpl-16a9435e441a42d99bd137a42277b29a-0.
INFO 03-02 01:04:10 [logger.py:42] Received request cmpl-22062b1fdc8a405892fa5260cb06913e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:10 [async_llm.py:261] Added request cmpl-22062b1fdc8a405892fa5260cb06913e-0.
INFO 03-02 01:04:11 [logger.py:42] Received request cmpl-7c8c1f6edcd949cc9b4e289018442437-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:11 [async_llm.py:261] Added request cmpl-7c8c1f6edcd949cc9b4e289018442437-0.
INFO 03-02 01:04:12 [logger.py:42] Received request cmpl-33fd7e049cdd45ef970da1d2089b4656-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:12 [async_llm.py:261] Added request cmpl-33fd7e049cdd45ef970da1d2089b4656-0.
INFO 03-02 01:04:13 [logger.py:42] Received request cmpl-5b153e201b344d10be4ff0abcd07a92e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:13 [async_llm.py:261] Added request cmpl-5b153e201b344d10be4ff0abcd07a92e-0.
INFO 03-02 01:04:14 [logger.py:42] Received request cmpl-e7a19eb9149047a58b7551831643d094-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:14 [async_llm.py:261] Added request cmpl-e7a19eb9149047a58b7551831643d094-0.
INFO 03-02 01:04:15 [logger.py:42] Received request cmpl-56b77e8c12a04fcf985b7bb801eb2eb3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:15 [async_llm.py:261] Added request cmpl-56b77e8c12a04fcf985b7bb801eb2eb3-0.
INFO 03-02 01:04:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:04:16 [logger.py:42] Received request cmpl-711b598aeaa94b96ada1722e8e317ee5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:16 [async_llm.py:261] Added request cmpl-711b598aeaa94b96ada1722e8e317ee5-0.
INFO 03-02 01:04:17 [logger.py:42] Received request cmpl-c518c6268f994d0fbc7a1259f97b4609-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:17 [async_llm.py:261] Added request cmpl-c518c6268f994d0fbc7a1259f97b4609-0.
INFO 03-02 01:04:18 [logger.py:42] Received request cmpl-1ffe2845629f4d18a4d6fd3f280f895b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:18 [async_llm.py:261] Added request cmpl-1ffe2845629f4d18a4d6fd3f280f895b-0.
INFO 03-02 01:04:20 [logger.py:42] Received request cmpl-6772de0cd3d147c1928fae4ee1aeb1be-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:20 [async_llm.py:261] Added request cmpl-6772de0cd3d147c1928fae4ee1aeb1be-0.
INFO 03-02 01:04:21 [logger.py:42] Received request cmpl-b8bceeb8d5664b8b823cbff391199390-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:21 [async_llm.py:261] Added request cmpl-b8bceeb8d5664b8b823cbff391199390-0.
INFO 03-02 01:04:22 [logger.py:42] Received request cmpl-f3f0c5cd39604354a7fadbd0409c9864-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:22 [async_llm.py:261] Added request cmpl-f3f0c5cd39604354a7fadbd0409c9864-0.
INFO 03-02 01:04:23 [logger.py:42] Received request cmpl-d8290631c0b449f694f4bda06d9f23cb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:23 [async_llm.py:261] Added request cmpl-d8290631c0b449f694f4bda06d9f23cb-0.
INFO 03-02 01:04:24 [logger.py:42] Received request cmpl-7c2fe3b8f8904730bd06517b721da38a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:24 [async_llm.py:261] Added request cmpl-7c2fe3b8f8904730bd06517b721da38a-0.
INFO 03-02 01:04:25 [logger.py:42] Received request cmpl-b84e1a1f31a8442f96ad11311ab678eb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:25 [async_llm.py:261] Added request cmpl-b84e1a1f31a8442f96ad11311ab678eb-0.
INFO 03-02 01:04:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:04:26 [logger.py:42] Received request cmpl-f063afc90f80469fb5e53269d8e6d04b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:26 [async_llm.py:261] Added request cmpl-f063afc90f80469fb5e53269d8e6d04b-0.
INFO 03-02 01:04:27 [logger.py:42] Received request cmpl-e242caa18f704a09ae3374df83f04bb0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:27 [async_llm.py:261] Added request cmpl-e242caa18f704a09ae3374df83f04bb0-0.
INFO 03-02 01:04:28 [logger.py:42] Received request cmpl-f76b9d39924c4a9181e47412396dc7e8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:28 [async_llm.py:261] Added request cmpl-f76b9d39924c4a9181e47412396dc7e8-0.
INFO 03-02 01:04:29 [logger.py:42] Received request cmpl-d3c6f194a7c749018eedf7c115388483-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:29 [async_llm.py:261] Added request cmpl-d3c6f194a7c749018eedf7c115388483-0.
INFO 03-02 01:04:30 [logger.py:42] Received request cmpl-c61cdbb912524c4c8e28b48697a2a654-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:30 [async_llm.py:261] Added request cmpl-c61cdbb912524c4c8e28b48697a2a654-0.
INFO 03-02 01:04:31 [logger.py:42] Received request cmpl-23b4622a6bbf43aeacfc3c5975a4c784-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:31 [async_llm.py:261] Added request cmpl-23b4622a6bbf43aeacfc3c5975a4c784-0.
INFO 03-02 01:04:33 [logger.py:42] Received request cmpl-1b21717572fc4e3792917802d486f02c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:33 [async_llm.py:261] Added request cmpl-1b21717572fc4e3792917802d486f02c-0.
INFO 03-02 01:04:34 [logger.py:42] Received request cmpl-e8a464a9e36d45c2a8ebe5a0af1da3b6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:34 [async_llm.py:261] Added request cmpl-e8a464a9e36d45c2a8ebe5a0af1da3b6-0.
INFO 03-02 01:04:35 [logger.py:42] Received request cmpl-a84a028116dd43fab5850a6ba1a3fb30-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:35 [async_llm.py:261] Added request cmpl-a84a028116dd43fab5850a6ba1a3fb30-0.
INFO 03-02 01:04:36 [logger.py:42] Received request cmpl-20fccec1b824405db430406169b53a67-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:36 [async_llm.py:261] Added request cmpl-20fccec1b824405db430406169b53a67-0.
INFO 03-02 01:04:36 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.9 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 1.3%, Prefix cache hit rate: 0.0%
INFO 03-02 01:04:37 [logger.py:42] Received request cmpl-dcfefc5353234747b24e36c2e555c5e2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:37 [async_llm.py:261] Added request cmpl-dcfefc5353234747b24e36c2e555c5e2-0.
INFO 03-02 01:04:38 [logger.py:42] Received request cmpl-dae07ba94adc43869de1d74ce4688e69-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:38 [async_llm.py:261] Added request cmpl-dae07ba94adc43869de1d74ce4688e69-0.
INFO 03-02 01:04:39 [logger.py:42] Received request cmpl-cafea462b5ac4d06ad922dce63772aaa-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:39 [async_llm.py:261] Added request cmpl-cafea462b5ac4d06ad922dce63772aaa-0.
INFO 03-02 01:04:40 [logger.py:42] Received request cmpl-983a362af2ee4f5491ea4af8d978b81e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:40 [async_llm.py:261] Added request cmpl-983a362af2ee4f5491ea4af8d978b81e-0.
INFO 03-02 01:04:41 [logger.py:42] Received request cmpl-2f00a8499b4b4f1db692bd748110bd7b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:41 [async_llm.py:261] Added request cmpl-2f00a8499b4b4f1db692bd748110bd7b-0.
INFO 03-02 01:04:42 [logger.py:42] Received request cmpl-a675af3e4b7047eebdd9ef382966b09d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:42 [async_llm.py:261] Added request cmpl-a675af3e4b7047eebdd9ef382966b09d-0.
INFO 03-02 01:04:43 [logger.py:42] Received request cmpl-6f4f61ce0c26449baf28d1621a072e39-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:43 [async_llm.py:261] Added request cmpl-6f4f61ce0c26449baf28d1621a072e39-0.
INFO 03-02 01:04:44 [logger.py:42] Received request cmpl-f2f038f03e304bf6b78f388cfc59c2a7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:44 [async_llm.py:261] Added request cmpl-f2f038f03e304bf6b78f388cfc59c2a7-0.
INFO 03-02 01:04:46 [logger.py:42] Received request cmpl-1b9e239a39274ce99e047b6dcc48f8bd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:46 [async_llm.py:261] Added request cmpl-1b9e239a39274ce99e047b6dcc48f8bd-0.
INFO 03-02 01:04:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.6 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:04:47 [logger.py:42] Received request cmpl-b3fc22a9509c4852bc9e539c669c491d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:47 [async_llm.py:261] Added request cmpl-b3fc22a9509c4852bc9e539c669c491d-0.
INFO 03-02 01:04:48 [logger.py:42] Received request cmpl-3bad2d89574b43f386e4f3e868f3a187-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:48 [async_llm.py:261] Added request cmpl-3bad2d89574b43f386e4f3e868f3a187-0.
INFO 03-02 01:04:49 [logger.py:42] Received request cmpl-54e27fa39e194c739ee72ce639b6e9c1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:49 [async_llm.py:261] Added request cmpl-54e27fa39e194c739ee72ce639b6e9c1-0.
INFO 03-02 01:04:50 [logger.py:42] Received request cmpl-1041d686127240928fd510f9f19716c9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:50 [async_llm.py:261] Added request cmpl-1041d686127240928fd510f9f19716c9-0.
INFO 03-02 01:04:51 [logger.py:42] Received request cmpl-2d671d4b39b14b9cb69adeee42d64327-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:51 [async_llm.py:261] Added request cmpl-2d671d4b39b14b9cb69adeee42d64327-0.
INFO 03-02 01:04:52 [logger.py:42] Received request cmpl-8fff8d8bc6aa417593cce5f16c1d9587-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:52 [async_llm.py:261] Added request cmpl-8fff8d8bc6aa417593cce5f16c1d9587-0.
INFO 03-02 01:04:53 [logger.py:42] Received request cmpl-55290521a7034bc1b21b8983473ccc0e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:53 [async_llm.py:261] Added request cmpl-55290521a7034bc1b21b8983473ccc0e-0.
INFO 03-02 01:04:54 [logger.py:42] Received request cmpl-d1b90de8fc8f4e01a0bfb27b204a4953-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:54 [async_llm.py:261] Added request cmpl-d1b90de8fc8f4e01a0bfb27b204a4953-0.
INFO 03-02 01:04:55 [logger.py:42] Received request cmpl-578b0434aadc402a9138c7b5f63e4445-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:55 [async_llm.py:261] Added request cmpl-578b0434aadc402a9138c7b5f63e4445-0.
INFO 03-02 01:04:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:04:56 [logger.py:42] Received request cmpl-b045ba017a88494686da4239de0fc47f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:56 [async_llm.py:261] Added request cmpl-b045ba017a88494686da4239de0fc47f-0.
INFO 03-02 01:04:57 [logger.py:42] Received request cmpl-ee9710dca66d45efa5658b698aef35a7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:57 [async_llm.py:261] Added request cmpl-ee9710dca66d45efa5658b698aef35a7-0.
INFO 03-02 01:04:59 [logger.py:42] Received request cmpl-635ca0b91832451bb4ab9fd408874f27-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:59 [async_llm.py:261] Added request cmpl-635ca0b91832451bb4ab9fd408874f27-0.
INFO 03-02 01:05:00 [logger.py:42] Received request cmpl-4be8fe152809403e913a197799fbf407-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:05:00 [async_llm.py:261] Added request cmpl-4be8fe152809403e913a197799fbf407-0.
INFO 03-02 01:05:01 [logger.py:42] Received request cmpl-7d8a15e09b2246bca3e4b72b0130c114-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:05:01 [async_llm.py:261] Added request cmpl-7d8a15e09b2246bca3e4b72b0130c114-0.
INFO 03-02 01:05:02 [logger.py:42] Received request cmpl-5e44e61efee9475b89ed5da8869cc942-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:05:02 [async_llm.py:261] Added request cmpl-5e44e61efee9475b89ed5da8869cc942-0.
INFO 03-02 01:05:03 [logger.py:42] Received request cmpl-c849bcec3c6143d38a1fac7ce5a1411c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:05:03 [async_llm.py:261] Added request cmpl-c849bcec3c6143d38a1fac7ce5a1411c-0.
INFO 03-02 01:05:04 [logger.py:42] Received request cmpl-293b2a009b754ba081c8e613df031da6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:05:04 [async_llm.py:261] Added request cmpl-293b2a009b754ba081c8e613df031da6-0.
INFO 03-02 01:05:05 [logger.py:42] Received request cmpl-1874c51ccdb74578ac16ad6e6e17f73c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:05:05 [async_llm.py:261] Added request cmpl-1874c51ccdb74578ac16ad6e6e17f73c-0.
INFO 03-02 01:05:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
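The periodic `[loggers.py:116]` metrics lines are consistent with the request pattern in the surrounding entries: each prompt is 7 tokens (the length of `prompt_token_ids`), each completion is capped at 5 tokens (`max_tokens=5`), and requests arrive roughly once per second. A quick sanity check of the averages (a sketch; the ~10 s averaging window and the count of 9 requests per window are assumptions inferred from the spacing of the metrics lines, not stated in the log):

```python
# Sanity-check the logged throughput averages against the request pattern.
# Assumption: averages cover the ~10 s gap between [loggers.py:116] lines,
# during which about 9 requests completed (one per second, minus jitter).
PROMPT_TOKENS = 7        # len(prompt_token_ids) in each log entry
MAX_GEN_TOKENS = 5       # max_tokens=5 in SamplingParams
WINDOW_S = 10.0          # spacing between metrics lines (assumed)
REQUESTS_IN_WINDOW = 9   # roughly one request per second (assumed)

prompt_tps = REQUESTS_IN_WINDOW * PROMPT_TOKENS / WINDOW_S
gen_tps = REQUESTS_IN_WINDOW * MAX_GEN_TOKENS / WINDOW_S
print(prompt_tps, gen_tps)  # 6.3 4.5 — matching the logged averages
```

Under those assumptions the arithmetic reproduces the logged "Avg prompt throughput: 6.3 tokens/s" and "Avg generation throughput: 4.5 tokens/s" exactly; windows showing 7.0 and 5.0 tokens/s correspond to ten requests landing in the same window.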
INFO 03-02 01:05:06 [logger.py:42] Received request cmpl-96a7a4213e214be0a8daa54b0570f6ca-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:05:06 [async_llm.py:261] Added request cmpl-96a7a4213e214be0a8daa54b0570f6ca-0.
INFO 03-02 01:05:07 [logger.py:42] Received request cmpl-39a637716c4e400e890387dedbc533ba-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:05:07 [async_llm.py:261] Added request cmpl-39a637716c4e400e890387dedbc533ba-0.
INFO 03-02 01:05:08 [logger.py:42] Received request cmpl-d42b3d1bbcc04fa1b000b047967a40e2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:05:08 [async_llm.py:261] Added request cmpl-d42b3d1bbcc04fa1b000b047967a40e2-0.
INFO 03-02 01:05:09 [logger.py:42] Received request cmpl-2370b2b390074fb6b95d6aa742cf837a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:05:09 [async_llm.py:261] Added request cmpl-2370b2b390074fb6b95d6aa742cf837a-0.
INFO 03-02 01:05:10 [logger.py:42] Received request cmpl-420145221ccb4f95938f75f955760259-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:05:10 [async_llm.py:261] Added request cmpl-420145221ccb4f95938f75f955760259-0.
INFO 03-02 01:05:12 [logger.py:42] Received request cmpl-19e363e4bc2f4555980dcaed2d574618-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:05:12 [async_llm.py:261] Added request cmpl-19e363e4bc2f4555980dcaed2d574618-0.
INFO 03-02 01:05:13 [logger.py:42] Received request cmpl-6de003bead744bea9edd2020a66515cf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:05:13 [async_llm.py:261] Added request cmpl-6de003bead744bea9edd2020a66515cf-0.
INFO 03-02 01:05:14 [logger.py:42] Received request cmpl-e59ce39c5cfa46cdb5bb1ddb24051a0c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:05:14 [async_llm.py:261] Added request cmpl-e59ce39c5cfa46cdb5bb1ddb24051a0c-0.
INFO 03-02 01:05:15 [logger.py:42] Received request cmpl-9921b743415549428e3d4bd57d2c8684-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:05:15 [async_llm.py:261] Added request cmpl-9921b743415549428e3d4bd57d2c8684-0.
INFO 03-02 01:05:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:05:16 [logger.py:42] Received request cmpl-0635f41055af4628a6c9e9bb3d8ff674-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:05:16 [async_llm.py:261] Added request cmpl-0635f41055af4628a6c9e9bb3d8ff674-0.
INFO 03-02 01:05:17 [logger.py:42] Received request cmpl-30a14aeabc584a319065fbadbbd75199-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:05:17 [async_llm.py:261] Added request cmpl-30a14aeabc584a319065fbadbbd75199-0.
INFO 03-02 01:05:18 [logger.py:42] Received request cmpl-fda7a1eb96fc40e0a6253bdec313c1c4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:05:18 [async_llm.py:261] Added request cmpl-fda7a1eb96fc40e0a6253bdec313c1c4-0.
INFO 03-02 01:05:19 [logger.py:42] Received request cmpl-17dc749068a04030871ee7549f676709-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:05:19 [async_llm.py:261] Added request cmpl-17dc749068a04030871ee7549f676709-0.
INFO 03-02 01:05:20 [logger.py:42] Received request cmpl-f2a125747c1e456ea88c02249f43d1e4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:05:20 [async_llm.py:261] Added request cmpl-f2a125747c1e456ea88c02249f43d1e4-0.
INFO 03-02 01:05:21 [logger.py:42] Received request cmpl-e379e07ea2424a888a69a75b4d036fae-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:05:21 [async_llm.py:261] Added request cmpl-e379e07ea2424a888a69a75b4d036fae-0.
INFO 03-02 01:05:22 [logger.py:42] Received request cmpl-d452d816be884a1882d63cf654034efc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:05:22 [async_llm.py:261] Added request cmpl-d452d816be884a1882d63cf654034efc-0.
INFO 03-02 01:05:23 [logger.py:42] Received request cmpl-f8fceceace2f4b73ada1cfd69ab27d5b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:05:23 [async_llm.py:261] Added request cmpl-f8fceceace2f4b73ada1cfd69ab27d5b-0.
INFO 03-02 01:05:25 [logger.py:42] Received request cmpl-1bf3d29ea6454258bbf07a8dec356a26-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:05:25 [async_llm.py:261] Added request cmpl-1bf3d29ea6454258bbf07a8dec356a26-0.
INFO 03-02 01:05:26 [logger.py:42] Received request cmpl-d4c984a74ce44682bbf415ceea4e90ff-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:05:26 [async_llm.py:261] Added request cmpl-d4c984a74ce44682bbf415ceea4e90ff-0.
INFO 03-02 01:05:26 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:05:27 [logger.py:42] Received request cmpl-e595d343c8994d3aaf6584f80c934f27-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:05:27 [async_llm.py:261] Added request cmpl-e595d343c8994d3aaf6584f80c934f27-0.
INFO 03-02 01:05:28 [logger.py:42] Received request cmpl-c3bd8e581eaf4bc5a416df5068e768d6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:05:28 [async_llm.py:261] Added request cmpl-c3bd8e581eaf4bc5a416df5068e768d6-0.
INFO 03-02 01:05:29 [logger.py:42] Received request cmpl-5a45c72bb47547cba00db84788afe696-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:05:29 [async_llm.py:261] Added request cmpl-5a45c72bb47547cba00db84788afe696-0.
INFO 03-02 01:05:30 [logger.py:42] Received request cmpl-709ee371bf704433af546572e365151c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:05:30 [async_llm.py:261] Added request cmpl-709ee371bf704433af546572e365151c-0.
INFO 03-02 01:05:31 [logger.py:42] Received request cmpl-bcbc89dfdbcb439a9beff8d341745c49-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:05:31 [async_llm.py:261] Added request cmpl-bcbc89dfdbcb439a9beff8d341745c49-0.
INFO 03-02 01:05:32 [logger.py:42] Received request cmpl-2378aaa486ee4c7595be439cfad70d53-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:05:32 [async_llm.py:261] Added request cmpl-2378aaa486ee4c7595be439cfad70d53-0.
INFO 03-02 01:05:33 [logger.py:42] Received request cmpl-4501155323434936b46c8e889a7fc4c7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:05:33 [async_llm.py:261] Added request cmpl-4501155323434936b46c8e889a7fc4c7-0.
INFO 03-02 01:05:34 [logger.py:42] Received request cmpl-a60ab0c6d6e741f5abc97a48f68b2764-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:05:34 [async_llm.py:261] Added request cmpl-a60ab0c6d6e741f5abc97a48f68b2764-0.
INFO 03-02 01:05:35 [logger.py:42] Received request cmpl-2ed5662cfc754be19a834f865fb30e25-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:05:35 [async_llm.py:261] Added request cmpl-2ed5662cfc754be19a834f865fb30e25-0.
INFO 03-02 01:05:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:05:36 [logger.py:42] Received request cmpl-276d9d11fa984fb5b19112117525f245-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:05:36 [async_llm.py:261] Added request cmpl-276d9d11fa984fb5b19112117525f245-0.
INFO 03-02 01:05:38 [logger.py:42] Received request cmpl-8b59ef4a30f94b86a55e2eea6134bfe2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:05:38 [async_llm.py:261] Added request cmpl-8b59ef4a30f94b86a55e2eea6134bfe2-0.
INFO 03-02 01:05:39 [logger.py:42] Received request cmpl-204830bb121047d79f003466d556b714-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:05:39 [async_llm.py:261] Added request cmpl-204830bb121047d79f003466d556b714-0.
INFO 03-02 01:05:40 [logger.py:42] Received request cmpl-19d9ab2f99884c15b9ef6a9e4424b09d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:05:40 [async_llm.py:261] Added request cmpl-19d9ab2f99884c15b9ef6a9e4424b09d-0.
INFO 03-02 01:05:41 [logger.py:42] Received request cmpl-7985854e392444ddaa16e6aa299c44a5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:05:41 [async_llm.py:261] Added request cmpl-7985854e392444ddaa16e6aa299c44a5-0.
[... 4 near-identical request cycles (01:05:42–01:05:45) elided: same prompt and SamplingParams as above, each returning "POST /v1/completions HTTP/1.1" 200 OK; only request IDs and timestamps differ ...]
INFO 03-02 01:05:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[... 9 near-identical request cycles (01:05:46–01:05:55) elided: same prompt and SamplingParams as above, each returning "POST /v1/completions HTTP/1.1" 200 OK; only request IDs and timestamps differ ...]
INFO 03-02 01:05:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[... 10 near-identical request cycles (01:05:56–01:06:06) elided: same prompt and SamplingParams as above, each returning "POST /v1/completions HTTP/1.1" 200 OK; only request IDs and timestamps differ ...]
INFO 03-02 01:06:06 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.9 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 1.3%, Prefix cache hit rate: 0.0%
[... 9 near-identical request cycles (01:06:07–01:06:16) elided: same prompt and SamplingParams as above, each returning "POST /v1/completions HTTP/1.1" 200 OK; only request IDs and timestamps differ ...]
INFO 03-02 01:06:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.6 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[... 8 near-identical request cycles (01:06:17–01:06:24) elided: same prompt and SamplingParams as above, each returning "POST /v1/completions HTTP/1.1" 200 OK; only request IDs and timestamps differ ...]
INFO 03-02 01:06:25 [logger.py:42] Received request cmpl-a32bdab089e04681a48aeadda107f422-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:06:25 [async_llm.py:261] Added request cmpl-a32bdab089e04681a48aeadda107f422-0.
INFO 03-02 01:06:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:06:26 [logger.py:42] Received request cmpl-428b6abe89424137bc1bba4c6b77be4b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:06:26 [async_llm.py:261] Added request cmpl-428b6abe89424137bc1bba4c6b77be4b-0.
INFO 03-02 01:06:27 [logger.py:42] Received request cmpl-d602a7e721dd4cb88c1fec42135da682-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:06:27 [async_llm.py:261] Added request cmpl-d602a7e721dd4cb88c1fec42135da682-0.
INFO 03-02 01:06:29 [logger.py:42] Received request cmpl-5e38d342d05042bd93840c67a8b7200b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:06:29 [async_llm.py:261] Added request cmpl-5e38d342d05042bd93840c67a8b7200b-0.
INFO 03-02 01:06:30 [logger.py:42] Received request cmpl-4bd87b2d781d475c94953edb86407e11-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:06:30 [async_llm.py:261] Added request cmpl-4bd87b2d781d475c94953edb86407e11-0.
INFO 03-02 01:06:31 [logger.py:42] Received request cmpl-b32068f34d5f4658be09cc44a1705f25-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:06:31 [async_llm.py:261] Added request cmpl-b32068f34d5f4658be09cc44a1705f25-0.
INFO 03-02 01:06:32 [logger.py:42] Received request cmpl-f6530d321d284418808a131dde0badb3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:06:32 [async_llm.py:261] Added request cmpl-f6530d321d284418808a131dde0badb3-0.
INFO 03-02 01:06:33 [logger.py:42] Received request cmpl-a9eec0ff3b774302a9418cd5a3351df5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:06:33 [async_llm.py:261] Added request cmpl-a9eec0ff3b774302a9418cd5a3351df5-0.
INFO 03-02 01:06:34 [logger.py:42] Received request cmpl-85655994437e4906a0668a1e3ad0546a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:06:34 [async_llm.py:261] Added request cmpl-85655994437e4906a0668a1e3ad0546a-0.
INFO 03-02 01:06:35 [logger.py:42] Received request cmpl-16f61dcee0364242bda399637d94a36e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:06:35 [async_llm.py:261] Added request cmpl-16f61dcee0364242bda399637d94a36e-0.
INFO 03-02 01:06:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:06:36 [logger.py:42] Received request cmpl-1930a157e95a4eaeaf0066d663cbf7af-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:06:36 [async_llm.py:261] Added request cmpl-1930a157e95a4eaeaf0066d663cbf7af-0.
INFO 03-02 01:06:37 [logger.py:42] Received request cmpl-9dd7f533de7449a9b560f5cb144a725e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:06:37 [async_llm.py:261] Added request cmpl-9dd7f533de7449a9b560f5cb144a725e-0.
INFO 03-02 01:06:38 [logger.py:42] Received request cmpl-f1527864e61b4ec18e050bdd0a53c914-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:06:38 [async_llm.py:261] Added request cmpl-f1527864e61b4ec18e050bdd0a53c914-0.
INFO 03-02 01:06:39 [logger.py:42] Received request cmpl-c18972892f6f48ca8604a0026bfb998d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:06:39 [async_llm.py:261] Added request cmpl-c18972892f6f48ca8604a0026bfb998d-0.
INFO 03-02 01:06:40 [logger.py:42] Received request cmpl-daa99953702041ba8ca98010c306dbe3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:06:40 [async_llm.py:261] Added request cmpl-daa99953702041ba8ca98010c306dbe3-0.
INFO 03-02 01:06:42 [logger.py:42] Received request cmpl-e14c0007dbdc48aea952c6bbb9be6b8d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:06:42 [async_llm.py:261] Added request cmpl-e14c0007dbdc48aea952c6bbb9be6b8d-0.
INFO 03-02 01:06:43 [logger.py:42] Received request cmpl-460a0baad0234f5f824f1746a5b74df6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:06:43 [async_llm.py:261] Added request cmpl-460a0baad0234f5f824f1746a5b74df6-0.
INFO 03-02 01:06:44 [logger.py:42] Received request cmpl-d69885cc08f04277a063197bf344951e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:06:44 [async_llm.py:261] Added request cmpl-d69885cc08f04277a063197bf344951e-0.
INFO 03-02 01:06:45 [logger.py:42] Received request cmpl-9969a9fee7d2468fbb72d7eecef041be-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:06:45 [async_llm.py:261] Added request cmpl-9969a9fee7d2468fbb72d7eecef041be-0.
INFO 03-02 01:06:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:06:46 [logger.py:42] Received request cmpl-4cc940a99cbd479cb8d63364616c0e78-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:06:46 [async_llm.py:261] Added request cmpl-4cc940a99cbd479cb8d63364616c0e78-0.
INFO 03-02 01:06:47 [logger.py:42] Received request cmpl-090efb2c5f58431cb598fc9df0fa8808-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:06:47 [async_llm.py:261] Added request cmpl-090efb2c5f58431cb598fc9df0fa8808-0.
INFO 03-02 01:06:48 [logger.py:42] Received request cmpl-f566d66e1aa84fdcb9667d37a54f2f12-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:06:48 [async_llm.py:261] Added request cmpl-f566d66e1aa84fdcb9667d37a54f2f12-0.
INFO 03-02 01:06:49 [logger.py:42] Received request cmpl-08337795b5f24ecfbc88dd19637b2c5a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:06:49 [async_llm.py:261] Added request cmpl-08337795b5f24ecfbc88dd19637b2c5a-0.
INFO 03-02 01:06:50 [logger.py:42] Received request cmpl-54d35ab862804102892064f50114096e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:06:50 [async_llm.py:261] Added request cmpl-54d35ab862804102892064f50114096e-0.
INFO 03-02 01:06:51 [logger.py:42] Received request cmpl-0398141a4a7341458082ab4ded165e8e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:06:51 [async_llm.py:261] Added request cmpl-0398141a4a7341458082ab4ded165e8e-0.
INFO 03-02 01:06:52 [logger.py:42] Received request cmpl-33111d14aa9647d58e4b285ba0daaa7d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:06:52 [async_llm.py:261] Added request cmpl-33111d14aa9647d58e4b285ba0daaa7d-0.
INFO 03-02 01:06:53 [logger.py:42] Received request cmpl-6b825b1a0d1e4784bad456ac5df201f1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:06:53 [async_llm.py:261] Added request cmpl-6b825b1a0d1e4784bad456ac5df201f1-0.
INFO 03-02 01:06:55 [logger.py:42] Received request cmpl-1b6682eb150a4291a609754e8ccc14b4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:06:55 [async_llm.py:261] Added request cmpl-1b6682eb150a4291a609754e8ccc14b4-0.
INFO 03-02 01:06:56 [logger.py:42] Received request cmpl-3cc8c3f7653a46a6be784f120b45fad5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:06:56 [async_llm.py:261] Added request cmpl-3cc8c3f7653a46a6be784f120b45fad5-0.
INFO 03-02 01:06:56 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:06:57 [logger.py:42] Received request cmpl-168675a57b80423c8553ba9bc1463a72-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:06:57 [async_llm.py:261] Added request cmpl-168675a57b80423c8553ba9bc1463a72-0.
INFO 03-02 01:06:58 [logger.py:42] Received request cmpl-21bf148476564830b573bf4c72327a36-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:06:58 [async_llm.py:261] Added request cmpl-21bf148476564830b573bf4c72327a36-0.
INFO 03-02 01:06:59 [logger.py:42] Received request cmpl-9b24bd55ea8347ddad3700f1d15b9774-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:06:59 [async_llm.py:261] Added request cmpl-9b24bd55ea8347ddad3700f1d15b9774-0.
INFO 03-02 01:07:00 [logger.py:42] Received request cmpl-ccd44f363835410b9a0d9cdcb00bd122-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:00 [async_llm.py:261] Added request cmpl-ccd44f363835410b9a0d9cdcb00bd122-0.
INFO 03-02 01:07:01 [logger.py:42] Received request cmpl-24fb7e37ff264a4aa79f8fc62cd91120-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:01 [async_llm.py:261] Added request cmpl-24fb7e37ff264a4aa79f8fc62cd91120-0.
INFO 03-02 01:07:02 [logger.py:42] Received request cmpl-c2d72fa1659c42fda9b1176542207570-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:02 [async_llm.py:261] Added request cmpl-c2d72fa1659c42fda9b1176542207570-0.
INFO 03-02 01:07:03 [logger.py:42] Received request cmpl-33b7a46204d04ce9aaef2ac2ee9aa6a0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:03 [async_llm.py:261] Added request cmpl-33b7a46204d04ce9aaef2ac2ee9aa6a0-0.
INFO 03-02 01:07:04 [logger.py:42] Received request cmpl-5b338a3a3c5b4ee0a63db57911fb9bd5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:04 [async_llm.py:261] Added request cmpl-5b338a3a3c5b4ee0a63db57911fb9bd5-0.
INFO 03-02 01:07:05 [logger.py:42] Received request cmpl-91cbe6e4f89f4031b104413d1bf69c82-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:05 [async_llm.py:261] Added request cmpl-91cbe6e4f89f4031b104413d1bf69c82-0.
INFO 03-02 01:07:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:07:06 [logger.py:42] Received request cmpl-82d1cbba933b4b52bb979a501b3691c6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:06 [async_llm.py:261] Added request cmpl-82d1cbba933b4b52bb979a501b3691c6-0.
INFO 03-02 01:07:08 [logger.py:42] Received request cmpl-983c0ae213f94f86bd930a63b2e6b531-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:08 [async_llm.py:261] Added request cmpl-983c0ae213f94f86bd930a63b2e6b531-0.
INFO 03-02 01:07:09 [logger.py:42] Received request cmpl-3f638aa85e814f33ab84564acac2affb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:09 [async_llm.py:261] Added request cmpl-3f638aa85e814f33ab84564acac2affb-0.
INFO 03-02 01:07:10 [logger.py:42] Received request cmpl-c652e01f1f7e4b2fb38503580601cbcc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:10 [async_llm.py:261] Added request cmpl-c652e01f1f7e4b2fb38503580601cbcc-0.
INFO 03-02 01:07:11 [logger.py:42] Received request cmpl-3b42cbe0b57b4c449051b310a85ccc80-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:11 [async_llm.py:261] Added request cmpl-3b42cbe0b57b4c449051b310a85ccc80-0.
INFO 03-02 01:07:12 [logger.py:42] Received request cmpl-d827eb7a14434d798d1ce040a64058d4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:12 [async_llm.py:261] Added request cmpl-d827eb7a14434d798d1ce040a64058d4-0.
INFO 03-02 01:07:13 [logger.py:42] Received request cmpl-cd82a8f772fe4ebcb80475a77f267bd1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:13 [async_llm.py:261] Added request cmpl-cd82a8f772fe4ebcb80475a77f267bd1-0.
INFO 03-02 01:07:14 [logger.py:42] Received request cmpl-b6b70537d077436f8a529967745fb8e2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:14 [async_llm.py:261] Added request cmpl-b6b70537d077436f8a529967745fb8e2-0.
INFO 03-02 01:07:15 [logger.py:42] Received request cmpl-40197ae6d73942b8acf492b8ca218aaa-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:15 [async_llm.py:261] Added request cmpl-40197ae6d73942b8acf492b8ca218aaa-0.
INFO 03-02 01:07:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:07:16 [logger.py:42] Received request cmpl-67deab5dcb7c40f299de590a23f21fe0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:16 [async_llm.py:261] Added request cmpl-67deab5dcb7c40f299de590a23f21fe0-0.
INFO 03-02 01:07:17 [logger.py:42] Received request cmpl-dfacdbe44cd94fe4800b9651d760f1bb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:17 [async_llm.py:261] Added request cmpl-dfacdbe44cd94fe4800b9651d760f1bb-0.
INFO 03-02 01:07:18 [logger.py:42] Received request cmpl-210714aba11043aead975632c1536bde-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:18 [async_llm.py:261] Added request cmpl-210714aba11043aead975632c1536bde-0.
INFO 03-02 01:07:19 [logger.py:42] Received request cmpl-205f03900c99496d95a30b6a93518aee-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:19 [async_llm.py:261] Added request cmpl-205f03900c99496d95a30b6a93518aee-0.
INFO 03-02 01:07:21 [logger.py:42] Received request cmpl-d103e89af51d47b7b345d0d909f71e75-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:21 [async_llm.py:261] Added request cmpl-d103e89af51d47b7b345d0d909f71e75-0.
INFO 03-02 01:07:22 [logger.py:42] Received request cmpl-84afb2f7e6124af1ae17cd9c6eab1a9e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:22 [async_llm.py:261] Added request cmpl-84afb2f7e6124af1ae17cd9c6eab1a9e-0.
INFO 03-02 01:07:23 [logger.py:42] Received request cmpl-00707da95c81432dad1eb10cfec00ed7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:23 [async_llm.py:261] Added request cmpl-00707da95c81432dad1eb10cfec00ed7-0.
INFO 03-02 01:07:24 [logger.py:42] Received request cmpl-fc8d5b4e706a450d80ab160778085abb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:24 [async_llm.py:261] Added request cmpl-fc8d5b4e706a450d80ab160778085abb-0.
INFO 03-02 01:07:25 [logger.py:42] Received request cmpl-f58de4aa45844f7eafab2075d64c8a71-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:25 [async_llm.py:261] Added request cmpl-f58de4aa45844f7eafab2075d64c8a71-0.
INFO 03-02 01:07:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:07:26 [logger.py:42] Received request cmpl-df91566e14144b19be0acb5c4b225256-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:26 [async_llm.py:261] Added request cmpl-df91566e14144b19be0acb5c4b225256-0.
INFO 03-02 01:07:27 [logger.py:42] Received request cmpl-65418fcdbccd477cb5167f596cc1d1c5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:27 [async_llm.py:261] Added request cmpl-65418fcdbccd477cb5167f596cc1d1c5-0.
INFO 03-02 01:07:28 [logger.py:42] Received request cmpl-0b560e0e5cfa499785436575f235f039-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:28 [async_llm.py:261] Added request cmpl-0b560e0e5cfa499785436575f235f039-0.
INFO 03-02 01:07:29 [logger.py:42] Received request cmpl-6dab213ea3d74e25bcfd71255f5941c5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:29 [async_llm.py:261] Added request cmpl-6dab213ea3d74e25bcfd71255f5941c5-0.
INFO 03-02 01:07:30 [logger.py:42] Received request cmpl-712a70e9b4064b20b502b29ff47b532e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:30 [async_llm.py:261] Added request cmpl-712a70e9b4064b20b502b29ff47b532e-0.
INFO 03-02 01:07:31 [logger.py:42] Received request cmpl-254ebea4a5df4041999ac2332a1eac9b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:31 [async_llm.py:261] Added request cmpl-254ebea4a5df4041999ac2332a1eac9b-0.
INFO 03-02 01:07:32 [logger.py:42] Received request cmpl-82353b6b5e0d4df0981d86954b9b72d6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:32 [async_llm.py:261] Added request cmpl-82353b6b5e0d4df0981d86954b9b72d6-0.
INFO 03-02 01:07:34 [logger.py:42] Received request cmpl-6bbfbb70ad5040d7b195d7ae7d4f4cb9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:34 [async_llm.py:261] Added request cmpl-6bbfbb70ad5040d7b195d7ae7d4f4cb9-0.
INFO 03-02 01:07:35 [logger.py:42] Received request cmpl-cc7e087f29e74e6aa67b8a82802a2529-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:35 [async_llm.py:261] Added request cmpl-cc7e087f29e74e6aa67b8a82802a2529-0.
INFO 03-02 01:07:36 [logger.py:42] Received request cmpl-3e8901983c0c45d8abe0509b9a2189a5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:36 [async_llm.py:261] Added request cmpl-3e8901983c0c45d8abe0509b9a2189a5-0.
INFO 03-02 01:07:36 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
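The periodic `loggers.py:116` lines above summarize engine health: prompt/generation throughput, queue depth, KV-cache occupancy, and prefix-cache hit rate. As a minimal sketch for monitoring (the field names in the returned dict are my own; the regex is derived only from the line format shown in this log), such a line can be parsed like this:

```python
import re

# A stats line copied verbatim from the log above.
LINE = ("INFO 03-02 01:07:36 [loggers.py:116] Engine 000: "
        "Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, "
        "Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, "
        "Prefix cache hit rate: 0.0%")

# Pattern mirrors the exact phrasing of the throughput summary lines.
STATS_RE = re.compile(
    r"Avg prompt throughput: (?P<prompt_tps>[\d.]+) tokens/s, "
    r"Avg generation throughput: (?P<gen_tps>[\d.]+) tokens/s, "
    r"Running: (?P<running>\d+) reqs, Waiting: (?P<waiting>\d+) reqs, "
    r"GPU KV cache usage: (?P<kv_pct>[\d.]+)%, "
    r"Prefix cache hit rate: (?P<prefix_hit_pct>[\d.]+)%"
)

def parse_engine_stats(line):
    """Extract the numeric fields from a vLLM throughput summary line.

    Returns a dict of floats, or None if the line is not a stats line.
    """
    m = STATS_RE.search(line)
    if m is None:
        return None
    return {k: float(v) for k, v in m.groupdict().items()}

print(parse_engine_stats(LINE))
```

Feeding every log line through `parse_engine_stats` and keeping the non-`None` results gives a simple time series of engine load between request entries.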
INFO 03-02 01:07:37 [logger.py:42] Received request cmpl-19cda7079caf4f8d92c9ce500e19b740-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:37 [async_llm.py:261] Added request cmpl-19cda7079caf4f8d92c9ce500e19b740-0.
INFO 03-02 01:07:38 [logger.py:42] Received request cmpl-09a980d9904640c080649d610f0a875c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:38 [async_llm.py:261] Added request cmpl-09a980d9904640c080649d610f0a875c-0.
INFO 03-02 01:07:39 [logger.py:42] Received request cmpl-3a3b3b7396dd4405a3d4f964591e1cc4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:39 [async_llm.py:261] Added request cmpl-3a3b3b7396dd4405a3d4f964591e1cc4-0.
INFO 03-02 01:07:40 [logger.py:42] Received request cmpl-02be4ac324e0497a9067af37e46436d5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:40 [async_llm.py:261] Added request cmpl-02be4ac324e0497a9067af37e46436d5-0.
INFO 03-02 01:07:41 [logger.py:42] Received request cmpl-07193e695cce40a3a1b3a2be6a5790ba-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:41 [async_llm.py:261] Added request cmpl-07193e695cce40a3a1b3a2be6a5790ba-0.
INFO 03-02 01:07:42 [logger.py:42] Received request cmpl-eb38ac6cadf741f0b0ef90236ab6353f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:42 [async_llm.py:261] Added request cmpl-eb38ac6cadf741f0b0ef90236ab6353f-0.
INFO 03-02 01:07:43 [logger.py:42] Received request cmpl-d4a306ae80064b8f87caadf42cde797e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:43 [async_llm.py:261] Added request cmpl-d4a306ae80064b8f87caadf42cde797e-0.
INFO 03-02 01:07:44 [logger.py:42] Received request cmpl-38c0c27b0fac44bb897ece7f9edcab34-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:44 [async_llm.py:261] Added request cmpl-38c0c27b0fac44bb897ece7f9edcab34-0.
INFO 03-02 01:07:46 [logger.py:42] Received request cmpl-d80788d023cb48e4abf7f1c2e7c764ae-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:46 [async_llm.py:261] Added request cmpl-d80788d023cb48e4abf7f1c2e7c764ae-0.
INFO 03-02 01:07:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:07:47 [logger.py:42] Received request cmpl-92d38cb67ddf4ad9bf14c0ea86416f8f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:47 [async_llm.py:261] Added request cmpl-92d38cb67ddf4ad9bf14c0ea86416f8f-0.
INFO 03-02 01:07:48 [logger.py:42] Received request cmpl-c61bad4542bf433f8f7a8bbf82cc96be-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:48 [async_llm.py:261] Added request cmpl-c61bad4542bf433f8f7a8bbf82cc96be-0.
INFO 03-02 01:07:49 [logger.py:42] Received request cmpl-c6300e7b1ac24842a702b60195977729-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:49 [async_llm.py:261] Added request cmpl-c6300e7b1ac24842a702b60195977729-0.
INFO 03-02 01:07:50 [logger.py:42] Received request cmpl-c7dadc91e64e47d5b670f533790256a0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:50 [async_llm.py:261] Added request cmpl-c7dadc91e64e47d5b670f533790256a0-0.
INFO 03-02 01:07:51 [logger.py:42] Received request cmpl-4f4df183d8e14649bc2a2db1331c0b27-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:51 [async_llm.py:261] Added request cmpl-4f4df183d8e14649bc2a2db1331c0b27-0.
INFO 03-02 01:07:52 [logger.py:42] Received request cmpl-14633d3a21f94279a259dee300c17f98-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:52 [async_llm.py:261] Added request cmpl-14633d3a21f94279a259dee300c17f98-0.
INFO 03-02 01:07:53 [logger.py:42] Received request cmpl-2e52eda4ba7749ee936ccc5253c4ee58-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:53 [async_llm.py:261] Added request cmpl-2e52eda4ba7749ee936ccc5253c4ee58-0.
INFO 03-02 01:07:54 [logger.py:42] Received request cmpl-7208032ac2a84e828691e6f46a2c9199-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:54 [async_llm.py:261] Added request cmpl-7208032ac2a84e828691e6f46a2c9199-0.
INFO 03-02 01:07:55 [logger.py:42] Received request cmpl-d273ecd478434355b9838257930b3b59-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:55 [async_llm.py:261] Added request cmpl-d273ecd478434355b9838257930b3b59-0.
INFO 03-02 01:07:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:07:56 [logger.py:42] Received request cmpl-d88384fee241479bb209e7a8d0cb1d49-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:56 [async_llm.py:261] Added request cmpl-d88384fee241479bb209e7a8d0cb1d49-0.
INFO 03-02 01:07:57 [logger.py:42] Received request cmpl-d9ef6ff888114fabb0ca5be49a66e680-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:57 [async_llm.py:261] Added request cmpl-d9ef6ff888114fabb0ca5be49a66e680-0.
INFO 03-02 01:07:59 [logger.py:42] Received request cmpl-e9130c22243d421eb039b4461806b0ca-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:59 [async_llm.py:261] Added request cmpl-e9130c22243d421eb039b4461806b0ca-0.
INFO 03-02 01:08:00 [logger.py:42] Received request cmpl-bd3c23a210624947b71f60f1a3b90ccc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:08:00 [async_llm.py:261] Added request cmpl-bd3c23a210624947b71f60f1a3b90ccc-0.
INFO 03-02 01:08:01 [logger.py:42] Received request cmpl-c04d26e0bdbe419386164a0a323f60cf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:08:01 [async_llm.py:261] Added request cmpl-c04d26e0bdbe419386164a0a323f60cf-0.
INFO 03-02 01:08:02 [logger.py:42] Received request cmpl-461b9363ac1c4de79126190d3ac89471-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:08:02 [async_llm.py:261] Added request cmpl-461b9363ac1c4de79126190d3ac89471-0.
INFO 03-02 01:08:03 [logger.py:42] Received request cmpl-b24a8d8e9aa94aa58b74b8e730788e8d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:08:03 [async_llm.py:261] Added request cmpl-b24a8d8e9aa94aa58b74b8e730788e8d-0.
INFO 03-02 01:08:04 [logger.py:42] Received request cmpl-64c90f54891849a5807ba832ca9dfff7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:08:04 [async_llm.py:261] Added request cmpl-64c90f54891849a5807ba832ca9dfff7-0.
INFO 03-02 01:08:05 [logger.py:42] Received request cmpl-4dfdc9f201964ba195c91a8594e4a674-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:08:05 [async_llm.py:261] Added request cmpl-4dfdc9f201964ba195c91a8594e4a674-0.
INFO 03-02 01:08:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:08:06 [logger.py:42] Received request cmpl-be05ac8beb33445391b175d76fda79ce-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:08:06 [async_llm.py:261] Added request cmpl-be05ac8beb33445391b175d76fda79ce-0.
INFO 03-02 01:08:07 [logger.py:42] Received request cmpl-a2303a7f4080443c8353b0811c533ec0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:08:07 [async_llm.py:261] Added request cmpl-a2303a7f4080443c8353b0811c533ec0-0.
INFO 03-02 01:08:08 [logger.py:42] Received request cmpl-ca23b078512a4b599f965602a8dc7552-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:08:08 [async_llm.py:261] Added request cmpl-ca23b078512a4b599f965602a8dc7552-0.
INFO 03-02 01:08:09 [logger.py:42] Received request cmpl-af1b6063ae884c0e8d3429e1d296a648-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:08:09 [async_llm.py:261] Added request cmpl-af1b6063ae884c0e8d3429e1d296a648-0.
INFO 03-02 01:08:10 [logger.py:42] Received request cmpl-4e21dc3790d547e2ac3d1bdb969ad1c6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:08:10 [async_llm.py:261] Added request cmpl-4e21dc3790d547e2ac3d1bdb969ad1c6-0.
INFO 03-02 01:08:12 [logger.py:42] Received request cmpl-8b674e0861b544349e8273ff09a93c31-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:08:12 [async_llm.py:261] Added request cmpl-8b674e0861b544349e8273ff09a93c31-0.
INFO 03-02 01:08:13 [logger.py:42] Received request cmpl-d59cacb9a3f4442f9bb9630892683e2e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:08:13 [async_llm.py:261] Added request cmpl-d59cacb9a3f4442f9bb9630892683e2e-0.
INFO 03-02 01:08:14 [logger.py:42] Received request cmpl-8f6e54a0c87a45c49960b0155271d38e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:08:14 [async_llm.py:261] Added request cmpl-8f6e54a0c87a45c49960b0155271d38e-0.
INFO 03-02 01:08:15 [logger.py:42] Received request cmpl-a73a6a3091fb41d18bd66bfea0cf84fe-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:08:15 [async_llm.py:261] Added request cmpl-a73a6a3091fb41d18bd66bfea0cf84fe-0.
INFO 03-02 01:08:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:08:16 [logger.py:42] Received request cmpl-72eef9a28bce406594797f6a616724d1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:08:16 [async_llm.py:261] Added request cmpl-72eef9a28bce406594797f6a616724d1-0.
INFO 03-02 01:08:17 [logger.py:42] Received request cmpl-e104ba065ef046edb78ae964b44329d2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:08:17 [async_llm.py:261] Added request cmpl-e104ba065ef046edb78ae964b44329d2-0.
INFO 03-02 01:08:26 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
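Each "Received request" / "Added request" pair above corresponds to one call against the pod's OpenAI-compatible `/v1/completions` endpoint. A minimal sketch of the request body that would produce a log entry like these, taking the prompt and sampling parameters directly from the log (the endpoint host is elided in the log, so it is not filled in here; everything else matches the recorded `SamplingParams`):

```python
import json

# Request body mirroring the parameters recorded in the log:
# greedy decoding (temperature=0.0, top_p=1.0), n=1, max_tokens=5.
# The model name comes from the funcpod header above.
payload = {
    "model": "translategemma-27b-it-FP8-Dynamic",
    "prompt": "write a quick sort algorithm.",
    "n": 1,
    "max_tokens": 5,
    "temperature": 0.0,
    "top_p": 1.0,
}

# POSTing this JSON to the pod's /v1/completions endpoint yields the
# "Received request ..." and "POST /v1/completions HTTP/1.1" 200 OK
# lines seen in the log.
print(json.dumps(payload, indent=2))
```

With `max_tokens=5` the server generates at most five tokens per request, which is consistent with the low average generation throughput (about 5 tokens/s at roughly one request per second) reported in the engine stats lines.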
INFO 03-02 01:08:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:08:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:08:56 [logger.py:42] Received request cmpl-42950c906c64481ca9f4ad9b0b540ac8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:08:56 [async_llm.py:261] Added request cmpl-42950c906c64481ca9f4ad9b0b540ac8-0.
INFO 03-02 01:09:06 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
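The periodic engine stats are consistent with the request cadence: the logged `prompt_token_ids` list has 7 entries, each request is capped at 5 generated tokens, and requests arrive roughly once per second — which reproduces the reported ~7 prompt tokens/s and ~5 generation tokens/s. A quick arithmetic check:

```python
# Sanity check: one 7-token prompt per second, each generating at most
# 5 tokens, matches the averaged throughput reported by loggers.py.
prompt_tokens = 7          # len of the logged prompt_token_ids
gen_tokens = 5             # max_tokens in the logged SamplingParams
requests_per_second = 1.0  # one "Received request" entry per second

prompt_tps = prompt_tokens * requests_per_second
gen_tps = gen_tokens * requests_per_second
print(prompt_tps, gen_tps)  # 7.0 5.0
```

The near-zero KV-cache usage and 0 running/waiting requests likewise reflect that each tiny request completes well before the next one arrives.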
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:09:40 [async_llm.py:261] Added request cmpl-80c090103b1a465baa8e657c318c727d-0.
INFO 03-02 01:09:41 [logger.py:42] Received request cmpl-e28c0ed8992049a6a69e1ef3426ecc23-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:09:41 [async_llm.py:261] Added request cmpl-e28c0ed8992049a6a69e1ef3426ecc23-0.
INFO 03-02 01:09:43 [logger.py:42] Received request cmpl-77c58b21428a4e949941e8c727ff9b64-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:09:43 [async_llm.py:261] Added request cmpl-77c58b21428a4e949941e8c727ff9b64-0.
INFO 03-02 01:09:44 [logger.py:42] Received request cmpl-c5b4f71898824fe4b8ff4ffe1e84f96c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:09:44 [async_llm.py:261] Added request cmpl-c5b4f71898824fe4b8ff4ffe1e84f96c-0.
INFO 03-02 01:09:45 [logger.py:42] Received request cmpl-ee670a09ced64c8f9f2f52b80fb1d7a5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:09:45 [async_llm.py:261] Added request cmpl-ee670a09ced64c8f9f2f52b80fb1d7a5-0.
INFO 03-02 01:09:46 [logger.py:42] Received request cmpl-e32dbec693024527ab9711aa525dc3e3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:09:46 [async_llm.py:261] Added request cmpl-e32dbec693024527ab9711aa525dc3e3-0.
INFO 03-02 01:09:46 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.6 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 1.3%, Prefix cache hit rate: 0.0%
INFO 03-02 01:09:47 [logger.py:42] Received request cmpl-282db57081ea47108786c76bebb76942-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:09:47 [async_llm.py:261] Added request cmpl-282db57081ea47108786c76bebb76942-0.
INFO 03-02 01:09:48 [logger.py:42] Received request cmpl-fc939a82650b421c8079f6bdda135428-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:09:48 [async_llm.py:261] Added request cmpl-fc939a82650b421c8079f6bdda135428-0.
INFO 03-02 01:09:49 [logger.py:42] Received request cmpl-34e8e9edfebe43208bd7eb84a3b144e0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:09:49 [async_llm.py:261] Added request cmpl-34e8e9edfebe43208bd7eb84a3b144e0-0.
INFO 03-02 01:09:50 [logger.py:42] Received request cmpl-35bd07bc422e48439e813e08f87f2cb9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:09:50 [async_llm.py:261] Added request cmpl-35bd07bc422e48439e813e08f87f2cb9-0.
INFO 03-02 01:09:51 [logger.py:42] Received request cmpl-a4f2ea4bba24415bb965dc5ef973c6aa-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:09:51 [async_llm.py:261] Added request cmpl-a4f2ea4bba24415bb965dc5ef973c6aa-0.
INFO 03-02 01:09:52 [logger.py:42] Received request cmpl-2b8218146cd94665926c1ee6826a9d99-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:09:52 [async_llm.py:261] Added request cmpl-2b8218146cd94665926c1ee6826a9d99-0.
INFO 03-02 01:09:53 [logger.py:42] Received request cmpl-3df9003adcf74ca5b220ccbb8ca65b0b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:09:53 [async_llm.py:261] Added request cmpl-3df9003adcf74ca5b220ccbb8ca65b0b-0.
INFO 03-02 01:09:55 [logger.py:42] Received request cmpl-94aec3a1c6134c6bbfaf81721fbb0aa7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:09:55 [async_llm.py:261] Added request cmpl-94aec3a1c6134c6bbfaf81721fbb0aa7-0.
INFO 03-02 01:09:56 [logger.py:42] Received request cmpl-52e04d486d4b4ac49c99b21c60a5851a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:09:56 [async_llm.py:261] Added request cmpl-52e04d486d4b4ac49c99b21c60a5851a-0.
INFO 03-02 01:09:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.9 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:09:57 [logger.py:42] Received request cmpl-bef64808767e4fbbb9639d17bab1af50-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:09:57 [async_llm.py:261] Added request cmpl-bef64808767e4fbbb9639d17bab1af50-0.
INFO 03-02 01:09:58 [logger.py:42] Received request cmpl-f092d9ce8c304a45a833bdc398a0ab24-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:09:58 [async_llm.py:261] Added request cmpl-f092d9ce8c304a45a833bdc398a0ab24-0.
INFO 03-02 01:09:59 [logger.py:42] Received request cmpl-e664c4ead8e242acb84a0841a6b48660-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:09:59 [async_llm.py:261] Added request cmpl-e664c4ead8e242acb84a0841a6b48660-0.
INFO 03-02 01:10:00 [logger.py:42] Received request cmpl-70f0659b64664b21ada7d3332995360a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:00 [async_llm.py:261] Added request cmpl-70f0659b64664b21ada7d3332995360a-0.
INFO 03-02 01:10:01 [logger.py:42] Received request cmpl-d546c43d461e42aaa2a2c13ae1c6c386-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:01 [async_llm.py:261] Added request cmpl-d546c43d461e42aaa2a2c13ae1c6c386-0.
INFO 03-02 01:10:02 [logger.py:42] Received request cmpl-75a665eff4de4fa6a40ca21fc23af2ef-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:02 [async_llm.py:261] Added request cmpl-75a665eff4de4fa6a40ca21fc23af2ef-0.
INFO 03-02 01:10:03 [logger.py:42] Received request cmpl-4b1f7fce201243ed9f4cd0189175cfe9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:03 [async_llm.py:261] Added request cmpl-4b1f7fce201243ed9f4cd0189175cfe9-0.
INFO 03-02 01:10:04 [logger.py:42] Received request cmpl-c7126a8a59f945da98dbe8c1753b6283-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:04 [async_llm.py:261] Added request cmpl-c7126a8a59f945da98dbe8c1753b6283-0.
INFO 03-02 01:10:05 [logger.py:42] Received request cmpl-ca96fcb613de4fe1bd536613cf1f1511-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:05 [async_llm.py:261] Added request cmpl-ca96fcb613de4fe1bd536613cf1f1511-0.
INFO 03-02 01:10:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:10:06 [logger.py:42] Received request cmpl-9a84e955d86749bfb6644d111fad725d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:06 [async_llm.py:261] Added request cmpl-9a84e955d86749bfb6644d111fad725d-0.
INFO 03-02 01:10:08 [logger.py:42] Received request cmpl-6e676ec037f74a7695dc1f06b07d2e2c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:08 [async_llm.py:261] Added request cmpl-6e676ec037f74a7695dc1f06b07d2e2c-0.
INFO 03-02 01:10:09 [logger.py:42] Received request cmpl-2e9ff49a54714c7795bd513d729850b9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:09 [async_llm.py:261] Added request cmpl-2e9ff49a54714c7795bd513d729850b9-0.
INFO 03-02 01:10:10 [logger.py:42] Received request cmpl-9fad22874403457e9faae4169a191f67-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:10 [async_llm.py:261] Added request cmpl-9fad22874403457e9faae4169a191f67-0.
INFO 03-02 01:10:11 [logger.py:42] Received request cmpl-183a6dbe3c084af9b9f7c6b89d2cb79e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:11 [async_llm.py:261] Added request cmpl-183a6dbe3c084af9b9f7c6b89d2cb79e-0.
INFO 03-02 01:10:12 [logger.py:42] Received request cmpl-9a04a08290544a27b3e5ad34a395031c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:12 [async_llm.py:261] Added request cmpl-9a04a08290544a27b3e5ad34a395031c-0.
INFO 03-02 01:10:13 [logger.py:42] Received request cmpl-a6fe45d24df5446e9d98858c263642e6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:13 [async_llm.py:261] Added request cmpl-a6fe45d24df5446e9d98858c263642e6-0.
INFO 03-02 01:10:14 [logger.py:42] Received request cmpl-e83e30c9464149df9efd31948f08bb01-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:14 [async_llm.py:261] Added request cmpl-e83e30c9464149df9efd31948f08bb01-0.
INFO 03-02 01:10:15 [logger.py:42] Received request cmpl-cdabdba52161419b92d5bb0af3422eab-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:15 [async_llm.py:261] Added request cmpl-cdabdba52161419b92d5bb0af3422eab-0.
INFO 03-02 01:10:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:10:16 [logger.py:42] Received request cmpl-43f1493f1d7a4641b3223260b4a0c719-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:16 [async_llm.py:261] Added request cmpl-43f1493f1d7a4641b3223260b4a0c719-0.
INFO 03-02 01:10:17 [logger.py:42] Received request cmpl-623a1d9f70c34a328771464df9bfec7b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:17 [async_llm.py:261] Added request cmpl-623a1d9f70c34a328771464df9bfec7b-0.
INFO 03-02 01:10:18 [logger.py:42] Received request cmpl-aaa2d9da5e94448484f9978fd83999cd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:18 [async_llm.py:261] Added request cmpl-aaa2d9da5e94448484f9978fd83999cd-0.
INFO 03-02 01:10:19 [logger.py:42] Received request cmpl-70ad7c9a15ba4b0aa46dd31036690a92-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:19 [async_llm.py:261] Added request cmpl-70ad7c9a15ba4b0aa46dd31036690a92-0.
INFO 03-02 01:10:21 [logger.py:42] Received request cmpl-6b05c01dc62e46a6abf82e27e0a4208e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:21 [async_llm.py:261] Added request cmpl-6b05c01dc62e46a6abf82e27e0a4208e-0.
INFO 03-02 01:10:22 [logger.py:42] Received request cmpl-c821aca9737b489aad3447c1161b7785-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:22 [async_llm.py:261] Added request cmpl-c821aca9737b489aad3447c1161b7785-0.
INFO 03-02 01:10:23 [logger.py:42] Received request cmpl-8149b981457b4d9eb55be62532b8da7d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:23 [async_llm.py:261] Added request cmpl-8149b981457b4d9eb55be62532b8da7d-0.
INFO 03-02 01:10:24 [logger.py:42] Received request cmpl-f46b3c11943944ce99edfebaa455ad5f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:24 [async_llm.py:261] Added request cmpl-f46b3c11943944ce99edfebaa455ad5f-0.
INFO 03-02 01:10:25 [logger.py:42] Received request cmpl-cd40c78c79734d129b553c128514f847-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:25 [async_llm.py:261] Added request cmpl-cd40c78c79734d129b553c128514f847-0.
INFO 03-02 01:10:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:10:26 [logger.py:42] Received request cmpl-b3b10393196e4884bb60395af0eed8c8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:26 [async_llm.py:261] Added request cmpl-b3b10393196e4884bb60395af0eed8c8-0.
INFO 03-02 01:10:27 [logger.py:42] Received request cmpl-3e40579267be4cdd91bbac7bd8e735cf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:27 [async_llm.py:261] Added request cmpl-3e40579267be4cdd91bbac7bd8e735cf-0.
INFO 03-02 01:10:28 [logger.py:42] Received request cmpl-027a0f96415c465db8e8a29c64b7e976-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:28 [async_llm.py:261] Added request cmpl-027a0f96415c465db8e8a29c64b7e976-0.
INFO 03-02 01:10:29 [logger.py:42] Received request cmpl-5a79939e3dfc4c35b3acf9fadd944184-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:29 [async_llm.py:261] Added request cmpl-5a79939e3dfc4c35b3acf9fadd944184-0.
INFO 03-02 01:10:30 [logger.py:42] Received request cmpl-9e9c0e668f2f45c2ac0d827d11f9ea4e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:30 [async_llm.py:261] Added request cmpl-9e9c0e668f2f45c2ac0d827d11f9ea4e-0.
INFO 03-02 01:10:31 [logger.py:42] Received request cmpl-56aa1d5ea8484f2fb67eacebcab163a2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:31 [async_llm.py:261] Added request cmpl-56aa1d5ea8484f2fb67eacebcab163a2-0.
INFO 03-02 01:10:32 [logger.py:42] Received request cmpl-21aa3b056aca4158a3c24275e4e4ae10-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:32 [async_llm.py:261] Added request cmpl-21aa3b056aca4158a3c24275e4e4ae10-0.
INFO 03-02 01:10:34 [logger.py:42] Received request cmpl-0062304359884727948e43f1bd9920de-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:34 [async_llm.py:261] Added request cmpl-0062304359884727948e43f1bd9920de-0.
INFO 03-02 01:10:35 [logger.py:42] Received request cmpl-a3ecd5292bef45cdb58dd25a39ebfb0d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:35 [async_llm.py:261] Added request cmpl-a3ecd5292bef45cdb58dd25a39ebfb0d-0.
INFO 03-02 01:10:36 [logger.py:42] Received request cmpl-262e0cf4669648cf8f11912cb793b665-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:36 [async_llm.py:261] Added request cmpl-262e0cf4669648cf8f11912cb793b665-0.
INFO 03-02 01:10:36 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:10:37 [logger.py:42] Received request cmpl-ff9f70d195cf446faf4da0f082ed63c9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:37 [async_llm.py:261] Added request cmpl-ff9f70d195cf446faf4da0f082ed63c9-0.
INFO 03-02 01:10:38 [logger.py:42] Received request cmpl-fcbb509d19fb48cd9d806b180e4643ed-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:38 [async_llm.py:261] Added request cmpl-fcbb509d19fb48cd9d806b180e4643ed-0.
INFO 03-02 01:10:39 [logger.py:42] Received request cmpl-3419bd591efd4618aaea19dd07b034c6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:39 [async_llm.py:261] Added request cmpl-3419bd591efd4618aaea19dd07b034c6-0.
INFO 03-02 01:10:40 [logger.py:42] Received request cmpl-3778d5bdd30b4b18a09952be52871308-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:40 [async_llm.py:261] Added request cmpl-3778d5bdd30b4b18a09952be52871308-0.
INFO 03-02 01:10:41 [logger.py:42] Received request cmpl-4c6f3ae9a0444fcc826c3281dedb1451-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:41 [async_llm.py:261] Added request cmpl-4c6f3ae9a0444fcc826c3281dedb1451-0.
INFO 03-02 01:10:42 [logger.py:42] Received request cmpl-b9722537057b4d84856d75ea1bc602eb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:42 [async_llm.py:261] Added request cmpl-b9722537057b4d84856d75ea1bc602eb-0.
INFO 03-02 01:10:43 [logger.py:42] Received request cmpl-bbba2bb8fc684e31ad2d42d06ab59a70-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:43 [async_llm.py:261] Added request cmpl-bbba2bb8fc684e31ad2d42d06ab59a70-0.
INFO 03-02 01:10:44 [logger.py:42] Received request cmpl-ef5c7c16873c47ac82f037ecb284f841-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:44 [async_llm.py:261] Added request cmpl-ef5c7c16873c47ac82f037ecb284f841-0.
INFO 03-02 01:10:45 [logger.py:42] Received request cmpl-5e16062149844ac49e4e3c9360764e4c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:45 [async_llm.py:261] Added request cmpl-5e16062149844ac49e4e3c9360764e4c-0.
INFO 03-02 01:10:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:10:47 [logger.py:42] Received request cmpl-e5a103a8d89543ef89afbcb5c1d2c62d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:47 [async_llm.py:261] Added request cmpl-e5a103a8d89543ef89afbcb5c1d2c62d-0.
INFO 03-02 01:10:48 [logger.py:42] Received request cmpl-9f85b449734a4d078bfe8ef89dc44d25-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:48 [async_llm.py:261] Added request cmpl-9f85b449734a4d078bfe8ef89dc44d25-0.
INFO 03-02 01:10:49 [logger.py:42] Received request cmpl-5408dd4f9f554be1af35f9d46069797a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:49 [async_llm.py:261] Added request cmpl-5408dd4f9f554be1af35f9d46069797a-0.
INFO 03-02 01:10:50 [logger.py:42] Received request cmpl-d592d4f46ef94a66975ed2b6d6b9e598-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:50 [async_llm.py:261] Added request cmpl-d592d4f46ef94a66975ed2b6d6b9e598-0.
INFO 03-02 01:10:51 [logger.py:42] Received request cmpl-9f2f81da75d948fdb9b0cf919be61c5e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:51 [async_llm.py:261] Added request cmpl-9f2f81da75d948fdb9b0cf919be61c5e-0.
INFO 03-02 01:10:52 [logger.py:42] Received request cmpl-b7e98598079246888789c6eac922f6b5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:52 [async_llm.py:261] Added request cmpl-b7e98598079246888789c6eac922f6b5-0.
INFO 03-02 01:10:53 [logger.py:42] Received request cmpl-ef19766e4f544ea296a5a113ccab4089-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:53 [async_llm.py:261] Added request cmpl-ef19766e4f544ea296a5a113ccab4089-0.
INFO 03-02 01:10:54 [logger.py:42] Received request cmpl-a59421fde0044c30b67ac7e25a1ff637-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:54 [async_llm.py:261] Added request cmpl-a59421fde0044c30b67ac7e25a1ff637-0.
INFO 03-02 01:10:55 [logger.py:42] Received request cmpl-d7833714c33749e18839cbb557eb4946-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:55 [async_llm.py:261] Added request cmpl-d7833714c33749e18839cbb557eb4946-0.
INFO 03-02 01:10:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:10:56 [logger.py:42] Received request cmpl-f39eba7f22c443aabe90e52d531d4272-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:56 [async_llm.py:261] Added request cmpl-f39eba7f22c443aabe90e52d531d4272-0.
INFO 03-02 01:10:57 [logger.py:42] Received request cmpl-d3473414337c4a4fa6a98f003720379d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:57 [async_llm.py:261] Added request cmpl-d3473414337c4a4fa6a98f003720379d-0.
INFO 03-02 01:10:58 [logger.py:42] Received request cmpl-83068e21c7d1484a8aeab49b741bfef8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:58 [async_llm.py:261] Added request cmpl-83068e21c7d1484a8aeab49b741bfef8-0.
INFO 03-02 01:11:00 [logger.py:42] Received request cmpl-9ce84bc558c24816b6929eea3862f73c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:11:00 [async_llm.py:261] Added request cmpl-9ce84bc558c24816b6929eea3862f73c-0.
INFO 03-02 01:11:01 [logger.py:42] Received request cmpl-5e3a61a6e63641b3b846c63d08f84085-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:11:01 [async_llm.py:261] Added request cmpl-5e3a61a6e63641b3b846c63d08f84085-0.
INFO 03-02 01:11:02 [logger.py:42] Received request cmpl-89923f6424c949fdad41b5ad0af13c79-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:11:02 [async_llm.py:261] Added request cmpl-89923f6424c949fdad41b5ad0af13c79-0.
INFO 03-02 01:11:03 [logger.py:42] Received request cmpl-4c5bd0feec364c9bb56e91d0b01bef52-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:11:03 [async_llm.py:261] Added request cmpl-4c5bd0feec364c9bb56e91d0b01bef52-0.
INFO 03-02 01:11:04 [logger.py:42] Received request cmpl-ee2701fe2fb44c19993e7faa0599d4c3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:11:04 [async_llm.py:261] Added request cmpl-ee2701fe2fb44c19993e7faa0599d4c3-0.
INFO 03-02 01:11:05 [logger.py:42] Received request cmpl-04d07f59ed63428fa2ae51d4851df79b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:11:05 [async_llm.py:261] Added request cmpl-04d07f59ed63428fa2ae51d4851df79b-0.
INFO 03-02 01:11:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:11:06 [logger.py:42] Received request cmpl-f2017ce71bf14b2389d02289a1d29940-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:11:06 [async_llm.py:261] Added request cmpl-f2017ce71bf14b2389d02289a1d29940-0.
INFO 03-02 01:11:07 [logger.py:42] Received request cmpl-6a4e85732fea49deb39d13e6d1331005-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:11:07 [async_llm.py:261] Added request cmpl-6a4e85732fea49deb39d13e6d1331005-0.
INFO 03-02 01:11:08 [logger.py:42] Received request cmpl-bb75dcfc7b7c420c924dfeeeae51ff1e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:11:08 [async_llm.py:261] Added request cmpl-bb75dcfc7b7c420c924dfeeeae51ff1e-0.
INFO 03-02 01:11:09 [logger.py:42] Received request cmpl-97cb03a110a94b33aa7e0d178cbb50f8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:11:09 [async_llm.py:261] Added request cmpl-97cb03a110a94b33aa7e0d178cbb50f8-0.
INFO 03-02 01:11:10 [logger.py:42] Received request cmpl-bde6bbcfa3ac49c4be1787d4099606ef-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:11:10 [async_llm.py:261] Added request cmpl-bde6bbcfa3ac49c4be1787d4099606ef-0.
INFO 03-02 01:11:11 [logger.py:42] Received request cmpl-38868f71e1fb4d6f839df70d88d2e90b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:11:11 [async_llm.py:261] Added request cmpl-38868f71e1fb4d6f839df70d88d2e90b-0.
INFO 03-02 01:11:13 [logger.py:42] Received request cmpl-9e1335943c054fadb41383e090a34200-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:11:13 [async_llm.py:261] Added request cmpl-9e1335943c054fadb41383e090a34200-0.
INFO 03-02 01:11:14 [logger.py:42] Received request cmpl-58d7c1befc774c21a294edde6b2076b4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:11:14 [async_llm.py:261] Added request cmpl-58d7c1befc774c21a294edde6b2076b4-0.
INFO 03-02 01:11:15 [logger.py:42] Received request cmpl-0a0bd1ff20ee439e9765ae28b7e2b20d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:11:15 [async_llm.py:261] Added request cmpl-0a0bd1ff20ee439e9765ae28b7e2b20d-0.
INFO 03-02 01:11:16 [logger.py:42] Received request cmpl-a352fa51782f4ffd94348c437991db1d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:11:16 [async_llm.py:261] Added request cmpl-a352fa51782f4ffd94348c437991db1d-0.
INFO 03-02 01:11:16 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.6 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 1.3%, Prefix cache hit rate: 0.0%
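The periodic `Engine 000` lines above report rolling throughput and cache metrics. When monitoring a funcpod from its log stream, these lines can be extracted mechanically; a minimal sketch, assuming the stats format shown above (the `parse_engine_stats` helper and regex are illustrative, not part of vLLM or InferX):

```python
import re

# Illustrative regex for an "Engine 000" stats line as it appears in this log.
STATS_RE = re.compile(
    r"Avg prompt throughput: (?P<prompt_tps>[\d.]+) tokens/s, "
    r"Avg generation throughput: (?P<gen_tps>[\d.]+) tokens/s, "
    r"Running: (?P<running>\d+) reqs, Waiting: (?P<waiting>\d+) reqs, "
    r"GPU KV cache usage: (?P<kv_pct>[\d.]+)%"
)

def parse_engine_stats(line: str) -> dict:
    """Return the numeric fields of a stats line, or raise if it is not one."""
    m = STATS_RE.search(line)
    if m is None:
        raise ValueError("not an engine stats line")
    d = m.groupdict()
    return {
        "prompt_tps": float(d["prompt_tps"]),
        "gen_tps": float(d["gen_tps"]),
        "running": int(d["running"]),
        "waiting": int(d["waiting"]),
        "kv_cache_pct": float(d["kv_pct"]),
    }

# Sample taken verbatim from the log above.
line = ("INFO 03-02 01:11:16 [loggers.py:116] Engine 000: Avg prompt throughput: "
        "7.0 tokens/s, Avg generation throughput: 4.6 tokens/s, Running: 1 reqs, "
        "Waiting: 0 reqs, GPU KV cache usage: 1.3%, Prefix cache hit rate: 0.0%")
print(parse_engine_stats(line))
```

Watching `waiting` and `kv_cache_pct` over time is a quick way to spot queue buildup or KV-cache pressure without scraping a metrics endpoint.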
INFO 03-02 01:11:17 [logger.py:42] Received request cmpl-bff2e3bf2b4b4382b21a7c3024997c8f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:11:17 [async_llm.py:261] Added request cmpl-bff2e3bf2b4b4382b21a7c3024997c8f-0.
INFO 03-02 01:11:18 [logger.py:42] Received request cmpl-2f99c9188bcf474884c016a1952a67ff-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:11:18 [async_llm.py:261] Added request cmpl-2f99c9188bcf474884c016a1952a67ff-0.
INFO 03-02 01:11:19 [logger.py:42] Received request cmpl-9155409b910843be887e09df7ba95168-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:11:19 [async_llm.py:261] Added request cmpl-9155409b910843be887e09df7ba95168-0.
INFO 03-02 01:11:20 [logger.py:42] Received request cmpl-4545f92a84a84e469b6a3782a3e5a269-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:11:20 [async_llm.py:261] Added request cmpl-4545f92a84a84e469b6a3782a3e5a269-0.
INFO 03-02 01:11:21 [logger.py:42] Received request cmpl-eb6276a96b71440fa9fd9f5b24d64047-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:11:21 [async_llm.py:261] Added request cmpl-eb6276a96b71440fa9fd9f5b24d64047-0.
INFO 03-02 01:11:22 [logger.py:42] Received request cmpl-f88c8a625e35476dbfa7e707527e33d8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:11:22 [async_llm.py:261] Added request cmpl-f88c8a625e35476dbfa7e707527e33d8-0.
INFO 03-02 01:11:23 [logger.py:42] Received request cmpl-ebc820e664e341de9d9390ffdd371996-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:11:23 [async_llm.py:261] Added request cmpl-ebc820e664e341de9d9390ffdd371996-0.
INFO 03-02 01:11:25 [logger.py:42] Received request cmpl-63089a6bb51647ed8ab94e3e942f93b8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:11:25 [async_llm.py:261] Added request cmpl-63089a6bb51647ed8ab94e3e942f93b8-0.
INFO 03-02 01:11:26 [logger.py:42] Received request cmpl-da97d267056d49d88d7087202abd17c7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:11:26 [async_llm.py:261] Added request cmpl-da97d267056d49d88d7087202abd17c7-0.
INFO 03-02 01:11:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.9 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:11:27 [logger.py:42] Received request cmpl-ebd08de1486f4ed79896586dc6bb7e45-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:11:27 [async_llm.py:261] Added request cmpl-ebd08de1486f4ed79896586dc6bb7e45-0.
INFO 03-02 01:11:28 [logger.py:42] Received request cmpl-9c520bb749a7423791631cd35531156a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:11:28 [async_llm.py:261] Added request cmpl-9c520bb749a7423791631cd35531156a-0.
INFO 03-02 01:11:29 [logger.py:42] Received request cmpl-b1cfb9746c864db5ad02a5d64f235171-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:11:29 [async_llm.py:261] Added request cmpl-b1cfb9746c864db5ad02a5d64f235171-0.
INFO 03-02 01:11:30 [logger.py:42] Received request cmpl-68862136ca864023a91abf6ff5bfd7c6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:11:30 [async_llm.py:261] Added request cmpl-68862136ca864023a91abf6ff5bfd7c6-0.
INFO 03-02 01:11:31 [logger.py:42] Received request cmpl-30e79b7fd91a4820ae3fd9b5f2169b4f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:11:31 [async_llm.py:261] Added request cmpl-30e79b7fd91a4820ae3fd9b5f2169b4f-0.
INFO 03-02 01:11:32 [logger.py:42] Received request cmpl-d6c473daacce4ab19541a137c331187a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:11:32 [async_llm.py:261] Added request cmpl-d6c473daacce4ab19541a137c331187a-0.
[... 3 further identical /v1/completions requests (01:11:33–01:11:35) elided: each repeats the same Received request / 200 OK / Added request triplet with a unique cmpl-* request ID and identical SamplingParams (prompt: 'write a quick sort algorithm.', max_tokens=5, temperature=0.0) ...]
INFO 03-02 01:11:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[... 9 further identical /v1/completions requests (01:11:36–01:11:45) elided: same Received request / 200 OK / Added request triplet, unique cmpl-* request IDs, identical SamplingParams ...]
INFO 03-02 01:11:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[... 9 further identical /v1/completions requests (01:11:46–01:11:55) elided: same Received request / 200 OK / Added request triplet, unique cmpl-* request IDs, identical SamplingParams ...]
INFO 03-02 01:11:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[... 10 further identical /v1/completions requests (01:11:56–01:12:06) elided: same Received request / 200 OK / Added request triplet, unique cmpl-* request IDs, identical SamplingParams ...]
INFO 03-02 01:12:06 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[... 9 further identical /v1/completions requests (01:12:07–01:12:15) elided: same Received request / 200 OK / Added request triplet, unique cmpl-* request IDs, identical SamplingParams ...]
INFO 03-02 01:12:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:12:17 [logger.py:42] Received request cmpl-0bfcff20764941d681a877057738597d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:17 [async_llm.py:261] Added request cmpl-0bfcff20764941d681a877057738597d-0.
INFO 03-02 01:12:18 [logger.py:42] Received request cmpl-8c0a971cbb5c479581794fe9f213dc9b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:18 [async_llm.py:261] Added request cmpl-8c0a971cbb5c479581794fe9f213dc9b-0.
INFO 03-02 01:12:19 [logger.py:42] Received request cmpl-0785c000af784279bd035df6b51e9efd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:19 [async_llm.py:261] Added request cmpl-0785c000af784279bd035df6b51e9efd-0.
INFO 03-02 01:12:20 [logger.py:42] Received request cmpl-d8f831d34fc0462884a3c2fcb2dea7e8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:20 [async_llm.py:261] Added request cmpl-d8f831d34fc0462884a3c2fcb2dea7e8-0.
INFO 03-02 01:12:21 [logger.py:42] Received request cmpl-349c705f1a614ea9a1394c8569742772-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:21 [async_llm.py:261] Added request cmpl-349c705f1a614ea9a1394c8569742772-0.
INFO 03-02 01:12:22 [logger.py:42] Received request cmpl-a6d6a42455c841c18c40bf4ab097fea9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:22 [async_llm.py:261] Added request cmpl-a6d6a42455c841c18c40bf4ab097fea9-0.
INFO 03-02 01:12:23 [logger.py:42] Received request cmpl-7ab8286d0edc4f3e90f7041079e677f2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:23 [async_llm.py:261] Added request cmpl-7ab8286d0edc4f3e90f7041079e677f2-0.
INFO 03-02 01:12:24 [logger.py:42] Received request cmpl-1aae22f1d4c24636a23a48b536cbd3cd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:24 [async_llm.py:261] Added request cmpl-1aae22f1d4c24636a23a48b536cbd3cd-0.
INFO 03-02 01:12:25 [logger.py:42] Received request cmpl-5ae1518dc4394181a8b3745759d6e5e0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:25 [async_llm.py:261] Added request cmpl-5ae1518dc4394181a8b3745759d6e5e0-0.
INFO 03-02 01:12:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:12:26 [logger.py:42] Received request cmpl-79e8d6ab5f0e425f959f3d7003070443-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:26 [async_llm.py:261] Added request cmpl-79e8d6ab5f0e425f959f3d7003070443-0.
INFO 03-02 01:12:27 [logger.py:42] Received request cmpl-23a2f3244400406e8b8b4bd79689f825-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:27 [async_llm.py:261] Added request cmpl-23a2f3244400406e8b8b4bd79689f825-0.
INFO 03-02 01:12:28 [logger.py:42] Received request cmpl-a58b65a5364d4373b872152447d9e5c4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:28 [async_llm.py:261] Added request cmpl-a58b65a5364d4373b872152447d9e5c4-0.
INFO 03-02 01:12:30 [logger.py:42] Received request cmpl-9068ece935ee4d95a8d573f564de0221-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:30 [async_llm.py:261] Added request cmpl-9068ece935ee4d95a8d573f564de0221-0.
INFO 03-02 01:12:31 [logger.py:42] Received request cmpl-7037844becee4650a319d710040753b2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:31 [async_llm.py:261] Added request cmpl-7037844becee4650a319d710040753b2-0.
INFO 03-02 01:12:32 [logger.py:42] Received request cmpl-e14594323cc144a0a744907ddb14e7a7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:32 [async_llm.py:261] Added request cmpl-e14594323cc144a0a744907ddb14e7a7-0.
INFO 03-02 01:12:33 [logger.py:42] Received request cmpl-720a836fbef44b9c98919833c7af9324-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:33 [async_llm.py:261] Added request cmpl-720a836fbef44b9c98919833c7af9324-0.
INFO 03-02 01:12:34 [logger.py:42] Received request cmpl-9aeae8f7be414a5f940f730bfa0278bf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:34 [async_llm.py:261] Added request cmpl-9aeae8f7be414a5f940f730bfa0278bf-0.
INFO 03-02 01:12:35 [logger.py:42] Received request cmpl-8de38417698f4bc2aaafc36392105dfc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:35 [async_llm.py:261] Added request cmpl-8de38417698f4bc2aaafc36392105dfc-0.
INFO 03-02 01:12:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:12:36 [logger.py:42] Received request cmpl-123e3ec68cbd4aabbbb4caf293437cfa-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:36 [async_llm.py:261] Added request cmpl-123e3ec68cbd4aabbbb4caf293437cfa-0.
INFO 03-02 01:12:37 [logger.py:42] Received request cmpl-af3ec8fb8b714d41b5a417ea8e2cc40e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:37 [async_llm.py:261] Added request cmpl-af3ec8fb8b714d41b5a417ea8e2cc40e-0.
INFO 03-02 01:12:38 [logger.py:42] Received request cmpl-308aa34a4bc64f70833acbe7906ad22f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:38 [async_llm.py:261] Added request cmpl-308aa34a4bc64f70833acbe7906ad22f-0.
INFO 03-02 01:12:39 [logger.py:42] Received request cmpl-1c8416e2234948cf8d3ba2316ed2bab0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:39 [async_llm.py:261] Added request cmpl-1c8416e2234948cf8d3ba2316ed2bab0-0.
INFO 03-02 01:12:40 [logger.py:42] Received request cmpl-36c285a788114176994dc9b52d7f8a5b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:40 [async_llm.py:261] Added request cmpl-36c285a788114176994dc9b52d7f8a5b-0.
INFO 03-02 01:12:41 [logger.py:42] Received request cmpl-fc604f27aabb4cfca9febd3ebd170b26-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:41 [async_llm.py:261] Added request cmpl-fc604f27aabb4cfca9febd3ebd170b26-0.
INFO 03-02 01:12:43 [logger.py:42] Received request cmpl-d238f773a15845419398cce2fbc8dce0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:43 [async_llm.py:261] Added request cmpl-d238f773a15845419398cce2fbc8dce0-0.
INFO 03-02 01:12:44 [logger.py:42] Received request cmpl-b98202abb0ee4bbc8f069634e57652e1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:44 [async_llm.py:261] Added request cmpl-b98202abb0ee4bbc8f069634e57652e1-0.
INFO 03-02 01:12:45 [logger.py:42] Received request cmpl-5d9cd0f52c5847d49986a25d079e1053-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:45 [async_llm.py:261] Added request cmpl-5d9cd0f52c5847d49986a25d079e1053-0.
INFO 03-02 01:12:46 [logger.py:42] Received request cmpl-63f8fcfc1f4242be95107e9b6663dd7d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:46 [async_llm.py:261] Added request cmpl-63f8fcfc1f4242be95107e9b6663dd7d-0.
INFO 03-02 01:12:46 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.7 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 1.3%, Prefix cache hit rate: 0.0%
INFO 03-02 01:12:47 [logger.py:42] Received request cmpl-57f4516e64f943328eb5c2f7fdbbf17b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:47 [async_llm.py:261] Added request cmpl-57f4516e64f943328eb5c2f7fdbbf17b-0.
INFO 03-02 01:12:48 [logger.py:42] Received request cmpl-efa5acd96ed3408f958da318e71b672f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:48 [async_llm.py:261] Added request cmpl-efa5acd96ed3408f958da318e71b672f-0.
INFO 03-02 01:12:49 [logger.py:42] Received request cmpl-e6dc4a9d12524c66bdd575f5314a9484-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:49 [async_llm.py:261] Added request cmpl-e6dc4a9d12524c66bdd575f5314a9484-0.
INFO 03-02 01:12:50 [logger.py:42] Received request cmpl-45c709a2716a4b199dc11e313b6acafe-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:50 [async_llm.py:261] Added request cmpl-45c709a2716a4b199dc11e313b6acafe-0.
INFO 03-02 01:12:51 [logger.py:42] Received request cmpl-7ecd4bf2e2a346c6bf2fb7857d478f69-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:51 [async_llm.py:261] Added request cmpl-7ecd4bf2e2a346c6bf2fb7857d478f69-0.
INFO 03-02 01:12:52 [logger.py:42] Received request cmpl-432438dcbe324fecb13872a8eb843ac3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:52 [async_llm.py:261] Added request cmpl-432438dcbe324fecb13872a8eb843ac3-0.
INFO 03-02 01:12:53 [logger.py:42] Received request cmpl-d2690b7be83c4a92b460628c02b62e0d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:53 [async_llm.py:261] Added request cmpl-d2690b7be83c4a92b460628c02b62e0d-0.
INFO 03-02 01:12:54 [logger.py:42] Received request cmpl-84bceed86c284909aedc4add8b79256a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:54 [async_llm.py:261] Added request cmpl-84bceed86c284909aedc4add8b79256a-0.
INFO 03-02 01:12:56 [logger.py:42] Received request cmpl-67e191c5c3bb4958b0c13de6961d41cb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:56 [async_llm.py:261] Added request cmpl-67e191c5c3bb4958b0c13de6961d41cb-0.
INFO 03-02 01:12:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.8 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
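The periodic `loggers.py` line above reports 10-second rolling engine metrics. A sketch for pulling those numbers out of a log stream — the regex targets the exact phrasing in this log, and other vLLM versions may format the line differently (an assumption to verify):

```python
import re

# Matches the metric fields in a vLLM "Engine NNN:" stats line as logged above.
STATS_RE = re.compile(
    r"Avg prompt throughput: (?P<prompt>[\d.]+) tokens/s, "
    r"Avg generation throughput: (?P<gen>[\d.]+) tokens/s, "
    r"Running: (?P<running>\d+) reqs, Waiting: (?P<waiting>\d+) reqs, "
    r"GPU KV cache usage: (?P<kv>[\d.]+)%"
)

def parse_stats(line: str):
    """Return the engine metrics from a stats line, or None if absent."""
    m = STATS_RE.search(line)
    if m is None:
        return None
    return {
        "prompt_tps": float(m["prompt"]),
        "gen_tps": float(m["gen"]),
        "running": int(m["running"]),
        "waiting": int(m["waiting"]),
        "kv_cache_pct": float(m["kv"]),
    }

line = ("INFO 03-02 01:12:56 [loggers.py:116] Engine 000: "
        "Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.8 tokens/s, "
        "Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, "
        "Prefix cache hit rate: 0.0%")
stats = parse_stats(line)
```

Filtering a log file through `parse_stats` and keeping the non-None results gives a time series of throughput and KV-cache usage for the funcpod.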
INFO 03-02 01:12:57 [logger.py:42] Received request cmpl-ab0f63ddad2b4956b3035754886f8914-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:57 [async_llm.py:261] Added request cmpl-ab0f63ddad2b4956b3035754886f8914-0.
INFO 03-02 01:12:58 [logger.py:42] Received request cmpl-b906d945a3ba4b5981abacd5b0d09422-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:58 [async_llm.py:261] Added request cmpl-b906d945a3ba4b5981abacd5b0d09422-0.
INFO 03-02 01:12:59 [logger.py:42] Received request cmpl-ebc6d9cc5bbe49fd92e0c8b72a0dcfaf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:59 [async_llm.py:261] Added request cmpl-ebc6d9cc5bbe49fd92e0c8b72a0dcfaf-0.
INFO 03-02 01:13:00 [logger.py:42] Received request cmpl-db2631a79ff849eba3c996d92eea4b1b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:00 [async_llm.py:261] Added request cmpl-db2631a79ff849eba3c996d92eea4b1b-0.
INFO 03-02 01:13:01 [logger.py:42] Received request cmpl-94b52489ab1f41c5ba297412b16b2c8a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:01 [async_llm.py:261] Added request cmpl-94b52489ab1f41c5ba297412b16b2c8a-0.
INFO 03-02 01:13:02 [logger.py:42] Received request cmpl-be313e3d2f4e4166bd3b687a2c9f94cc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:02 [async_llm.py:261] Added request cmpl-be313e3d2f4e4166bd3b687a2c9f94cc-0.
INFO 03-02 01:13:03 [logger.py:42] Received request cmpl-22a7e6d59c6b4acbb665b0f1157e00df-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:03 [async_llm.py:261] Added request cmpl-22a7e6d59c6b4acbb665b0f1157e00df-0.
INFO 03-02 01:13:04 [logger.py:42] Received request cmpl-014aa9b08b6249bdb64ce80cc1e537fd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:04 [async_llm.py:261] Added request cmpl-014aa9b08b6249bdb64ce80cc1e537fd-0.
INFO 03-02 01:13:05 [logger.py:42] Received request cmpl-5bd4eb16a7534a7abcc85c1087b4074a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:05 [async_llm.py:261] Added request cmpl-5bd4eb16a7534a7abcc85c1087b4074a-0.
INFO 03-02 01:13:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:13:06 [logger.py:42] Received request cmpl-b92c86b8b4b141e0990f30628cf67053-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:06 [async_llm.py:261] Added request cmpl-b92c86b8b4b141e0990f30628cf67053-0.
INFO 03-02 01:13:07 [logger.py:42] Received request cmpl-e27ae387fd2c4d869e45d1421ef595b1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:07 [async_llm.py:261] Added request cmpl-e27ae387fd2c4d869e45d1421ef595b1-0.
INFO 03-02 01:13:09 [logger.py:42] Received request cmpl-a4ec8920e8044cb2b40e9c3e77c2a40e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:09 [async_llm.py:261] Added request cmpl-a4ec8920e8044cb2b40e9c3e77c2a40e-0.
INFO 03-02 01:13:10 [logger.py:42] Received request cmpl-6c6747a5b7844235bc85904c44ff30ca-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:10 [async_llm.py:261] Added request cmpl-6c6747a5b7844235bc85904c44ff30ca-0.
INFO 03-02 01:13:11 [logger.py:42] Received request cmpl-d8e137f6b34844f898e16cfd4fb352f6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:11 [async_llm.py:261] Added request cmpl-d8e137f6b34844f898e16cfd4fb352f6-0.
INFO 03-02 01:13:12 [logger.py:42] Received request cmpl-10617694bac2411ab74c3b129311f5cd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:12 [async_llm.py:261] Added request cmpl-10617694bac2411ab74c3b129311f5cd-0.
INFO 03-02 01:13:13 [logger.py:42] Received request cmpl-220c4cc7506e4768b14e22d136bf741e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:13 [async_llm.py:261] Added request cmpl-220c4cc7506e4768b14e22d136bf741e-0.
INFO 03-02 01:13:14 [logger.py:42] Received request cmpl-75f0dcb91fc54b10ac8995c3ca4f84ce-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:14 [async_llm.py:261] Added request cmpl-75f0dcb91fc54b10ac8995c3ca4f84ce-0.
INFO 03-02 01:13:15 [logger.py:42] Received request cmpl-bc8492cad8e244d797ef55fa6c35bafc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:15 [async_llm.py:261] Added request cmpl-bc8492cad8e244d797ef55fa6c35bafc-0.
INFO 03-02 01:13:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:13:16 [logger.py:42] Received request cmpl-4f4e47187a5540609aee0786fbcd9a10-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:16 [async_llm.py:261] Added request cmpl-4f4e47187a5540609aee0786fbcd9a10-0.
INFO 03-02 01:13:17 [logger.py:42] Received request cmpl-46d56f0b05b84aadb1fa307295053426-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:17 [async_llm.py:261] Added request cmpl-46d56f0b05b84aadb1fa307295053426-0.
INFO 03-02 01:13:18 [logger.py:42] Received request cmpl-d5e6031d11104722ad55823edbd43463-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:18 [async_llm.py:261] Added request cmpl-d5e6031d11104722ad55823edbd43463-0.
INFO 03-02 01:13:19 [logger.py:42] Received request cmpl-4dd50b86920c4e31ba350bd1bcc51587-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:19 [async_llm.py:261] Added request cmpl-4dd50b86920c4e31ba350bd1bcc51587-0.
INFO 03-02 01:13:21 [logger.py:42] Received request cmpl-a171b6e9744d440aa03fd8e4c279bf6f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:21 [async_llm.py:261] Added request cmpl-a171b6e9744d440aa03fd8e4c279bf6f-0.
INFO 03-02 01:13:22 [logger.py:42] Received request cmpl-c0ae104df9ee42ec99ca1cf2e7e026fc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:22 [async_llm.py:261] Added request cmpl-c0ae104df9ee42ec99ca1cf2e7e026fc-0.
INFO 03-02 01:13:23 [logger.py:42] Received request cmpl-9655366c43f84cb8b2cd274a23824117-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:23 [async_llm.py:261] Added request cmpl-9655366c43f84cb8b2cd274a23824117-0.
INFO 03-02 01:13:24 [logger.py:42] Received request cmpl-a1ba1d89d48043b0a932464fa4763925-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:24 [async_llm.py:261] Added request cmpl-a1ba1d89d48043b0a932464fa4763925-0.
INFO 03-02 01:13:25 [logger.py:42] Received request cmpl-5def899e9fea4fc8877d0f0ad5a76841-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:25 [async_llm.py:261] Added request cmpl-5def899e9fea4fc8877d0f0ad5a76841-0.
INFO 03-02 01:13:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
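The logged averages are consistent with the request pattern: each request carries 7 prompt tokens (see `prompt_token_ids`) and generates at most `max_tokens=5`, arriving roughly every 1.1 s judging from the timestamps. A quick sanity-check sketch (the ~1.1 s interarrival figure is an estimate from this log, not a measured value):

```python
def expected_throughput(tokens_per_req: int, interval_s: float) -> float:
    """Average tokens/s when one request of tokens_per_req arrives every interval_s."""
    return tokens_per_req / interval_s

# ~10 requests over ~11 s in the window above -> ~1.1 s per request (assumption)
prompt_tps = expected_throughput(7, 1.1)  # close to the logged 6.3 tokens/s
gen_tps = expected_throughput(5, 1.1)     # close to the logged 4.5 tokens/s
```

This also explains why `Running` and `Waiting` stay at 0 and KV-cache usage at 0.7%: each tiny request completes well before the next one arrives, so the engine is essentially idle between requests.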
INFO 03-02 01:13:26 [logger.py:42] Received request cmpl-61609e608263450a91f9da51efcaa12f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:26 [async_llm.py:261] Added request cmpl-61609e608263450a91f9da51efcaa12f-0.
INFO 03-02 01:13:27 [logger.py:42] Received request cmpl-bef0f9fb2ff342c2847349e67ab577ed-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:27 [async_llm.py:261] Added request cmpl-bef0f9fb2ff342c2847349e67ab577ed-0.
INFO 03-02 01:13:28 [logger.py:42] Received request cmpl-8cf46b68c941445e8339a0dae39ea09f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:28 [async_llm.py:261] Added request cmpl-8cf46b68c941445e8339a0dae39ea09f-0.
INFO 03-02 01:13:29 [logger.py:42] Received request cmpl-9be2ec2162664ed9a7d010a68aac579c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:29 [async_llm.py:261] Added request cmpl-9be2ec2162664ed9a7d010a68aac579c-0.
INFO 03-02 01:13:30 [logger.py:42] Received request cmpl-a564347b42b94a5a9a36432174b31f25-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:30 [async_llm.py:261] Added request cmpl-a564347b42b94a5a9a36432174b31f25-0.
INFO 03-02 01:13:31 [logger.py:42] Received request cmpl-08347898a6fb4d64aa9591d219bc0b80-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:31 [async_llm.py:261] Added request cmpl-08347898a6fb4d64aa9591d219bc0b80-0.
INFO 03-02 01:13:32 [logger.py:42] Received request cmpl-cc1a2c8b150f40e19f38a27f4829bcd9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:32 [async_llm.py:261] Added request cmpl-cc1a2c8b150f40e19f38a27f4829bcd9-0.
INFO 03-02 01:13:34 [logger.py:42] Received request cmpl-663164ba48474ce48ebff879587b512b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:34 [async_llm.py:261] Added request cmpl-663164ba48474ce48ebff879587b512b-0.
INFO 03-02 01:13:35 [logger.py:42] Received request cmpl-84129945466b4cef9cd808bf294fe364-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:35 [async_llm.py:261] Added request cmpl-84129945466b4cef9cd808bf294fe364-0.
INFO 03-02 01:13:36 [logger.py:42] Received request cmpl-13140c4720f9406cb62a63b84996611d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:36 [async_llm.py:261] Added request cmpl-13140c4720f9406cb62a63b84996611d-0.
INFO 03-02 01:13:36 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:13:37 [logger.py:42] Received request cmpl-b9505e53dd12447f940e6a6ab2cc01a4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:37 [async_llm.py:261] Added request cmpl-b9505e53dd12447f940e6a6ab2cc01a4-0.
INFO 03-02 01:13:38 [logger.py:42] Received request cmpl-75b828569ab946d78968c308cd3d7fdc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:38 [async_llm.py:261] Added request cmpl-75b828569ab946d78968c308cd3d7fdc-0.
INFO 03-02 01:13:39 [logger.py:42] Received request cmpl-15bb670314c14a349742418ac5a7bdb4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:39 [async_llm.py:261] Added request cmpl-15bb670314c14a349742418ac5a7bdb4-0.
INFO 03-02 01:13:40 [logger.py:42] Received request cmpl-3066a8859f27425f976e51fd478a0f13-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:40 [async_llm.py:261] Added request cmpl-3066a8859f27425f976e51fd478a0f13-0.
INFO 03-02 01:13:41 [logger.py:42] Received request cmpl-604c9a0f175a47f79e21fb6f50686c67-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:41 [async_llm.py:261] Added request cmpl-604c9a0f175a47f79e21fb6f50686c67-0.
INFO 03-02 01:13:42 [logger.py:42] Received request cmpl-b75c6f4549a24488a85e8c11c854a47d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:42 [async_llm.py:261] Added request cmpl-b75c6f4549a24488a85e8c11c854a47d-0.
INFO 03-02 01:13:43 [logger.py:42] Received request cmpl-93422fef365a4f53b48d0fe7861f220c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:43 [async_llm.py:261] Added request cmpl-93422fef365a4f53b48d0fe7861f220c-0.
INFO 03-02 01:13:44 [logger.py:42] Received request cmpl-28b84a4917c2412ea1a79a50025473da-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:44 [async_llm.py:261] Added request cmpl-28b84a4917c2412ea1a79a50025473da-0.
INFO 03-02 01:13:45 [logger.py:42] Received request cmpl-bdf4ba9ea7e4496898d41beb1da6594d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:45 [async_llm.py:261] Added request cmpl-bdf4ba9ea7e4496898d41beb1da6594d-0.
INFO 03-02 01:13:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:13:47 [logger.py:42] Received request cmpl-9c8255b85f95489f8e14930256decda5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:47 [async_llm.py:261] Added request cmpl-9c8255b85f95489f8e14930256decda5-0.
INFO 03-02 01:13:48 [logger.py:42] Received request cmpl-55a5dbed3fbb4f8f986302b1d7c74aad-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:48 [async_llm.py:261] Added request cmpl-55a5dbed3fbb4f8f986302b1d7c74aad-0.
INFO 03-02 01:13:49 [logger.py:42] Received request cmpl-da803eddbd314cd4a9ec4f9ebd690e8f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:49 [async_llm.py:261] Added request cmpl-da803eddbd314cd4a9ec4f9ebd690e8f-0.
INFO 03-02 01:13:50 [logger.py:42] Received request cmpl-0d09736c92cb42e3b2c3cf4db3813af9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:50 [async_llm.py:261] Added request cmpl-0d09736c92cb42e3b2c3cf4db3813af9-0.
INFO 03-02 01:13:51 [logger.py:42] Received request cmpl-80d5138816ee41cdb043b01cd77a2544-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:51 [async_llm.py:261] Added request cmpl-80d5138816ee41cdb043b01cd77a2544-0.
INFO 03-02 01:13:52 [logger.py:42] Received request cmpl-04332de1c7be4eb1b492810b8f2e1484-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:52 [async_llm.py:261] Added request cmpl-04332de1c7be4eb1b492810b8f2e1484-0.
INFO 03-02 01:13:53 [logger.py:42] Received request cmpl-6b7f96c009304fc5b46386c965e3caf6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:53 [async_llm.py:261] Added request cmpl-6b7f96c009304fc5b46386c965e3caf6-0.
INFO:  1.2.3.4:123 - "POST /v1/completions HTTP/1.1" 404 Not Found
INFO 03-02 01:13:54 [logger.py:42] Received request cmpl-b0f7639f6f0e49a9b3b130c731c7cee6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:54 [async_llm.py:261] Added request cmpl-b0f7639f6f0e49a9b3b130c731c7cee6-0.
INFO 03-02 01:13:55 [logger.py:42] Received request cmpl-ac6c71b638a6403096cdeff831104026-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:55 [async_llm.py:261] Added request cmpl-ac6c71b638a6403096cdeff831104026-0.
INFO 03-02 01:13:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:13:56 [logger.py:42] Received request cmpl-c7b8de92e1934f50ba5bbe5492e3d12f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:56 [async_llm.py:261] Added request cmpl-c7b8de92e1934f50ba5bbe5492e3d12f-0.
INFO 03-02 01:13:57 [logger.py:42] Received request cmpl-71563ab61d7949b4bf53dabe6051de14-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:57 [async_llm.py:261] Added request cmpl-71563ab61d7949b4bf53dabe6051de14-0.
INFO 03-02 01:13:58 [logger.py:42] Received request cmpl-a23822cb7db64502b44e0d73395bea55-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:58 [async_llm.py:261] Added request cmpl-a23822cb7db64502b44e0d73395bea55-0.
INFO 03-02 01:14:00 [logger.py:42] Received request cmpl-ebbc625d4fe3469694ed5fd9633b97e7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:14:00 [async_llm.py:261] Added request cmpl-ebbc625d4fe3469694ed5fd9633b97e7-0.
INFO 03-02 01:14:01 [logger.py:42] Received request cmpl-b33f85b61f0e440da369fb4b7901a97f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:14:01 [async_llm.py:261] Added request cmpl-b33f85b61f0e440da369fb4b7901a97f-0.
INFO 03-02 01:14:02 [logger.py:42] Received request cmpl-ab5877b1d0ac4b7092a165c358f636e5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:14:02 [async_llm.py:261] Added request cmpl-ab5877b1d0ac4b7092a165c358f636e5-0.
INFO 03-02 01:14:03 [logger.py:42] Received request cmpl-9de53d3392df43e9849bf2a893fb380f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:14:03 [async_llm.py:261] Added request cmpl-9de53d3392df43e9849bf2a893fb380f-0.
INFO 03-02 01:14:04 [logger.py:42] Received request cmpl-fa84bc0312864ebd82a56b0564b43f5b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:14:04 [async_llm.py:261] Added request cmpl-fa84bc0312864ebd82a56b0564b43f5b-0.
INFO 03-02 01:14:05 [logger.py:42] Received request cmpl-3735644b757f474397fc7ef9ea9f0a3f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:14:05 [async_llm.py:261] Added request cmpl-3735644b757f474397fc7ef9ea9f0a3f-0.
INFO 03-02 01:14:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:14:06 [logger.py:42] Received request cmpl-7484a9b33fcf46ecad6002a7976f443d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:14:06 [async_llm.py:261] Added request cmpl-7484a9b33fcf46ecad6002a7976f443d-0.
INFO 03-02 01:14:07 [logger.py:42] Received request cmpl-6c9578d3b93b42dcaabe9c864d9dd5f2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:14:07 [async_llm.py:261] Added request cmpl-6c9578d3b93b42dcaabe9c864d9dd5f2-0.
INFO 03-02 01:14:08 [logger.py:42] Received request cmpl-d390f800f5b045f6a02182cf6588ca53-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:14:08 [async_llm.py:261] Added request cmpl-d390f800f5b045f6a02182cf6588ca53-0.
INFO 03-02 01:14:09 [logger.py:42] Received request cmpl-803fa0024bd34d7f8d9d0b2bd1c0fcbd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:14:09 [async_llm.py:261] Added request cmpl-803fa0024bd34d7f8d9d0b2bd1c0fcbd-0.
INFO 03-02 01:14:10 [logger.py:42] Received request cmpl-ef6cc52cec384fbdac36bbf0d3deaac1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:14:10 [async_llm.py:261] Added request cmpl-ef6cc52cec384fbdac36bbf0d3deaac1-0.
INFO 03-02 01:14:11 [logger.py:42] Received request cmpl-a20a9041ab8948849057c3ab2badff93-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:14:11 [async_llm.py:261] Added request cmpl-a20a9041ab8948849057c3ab2badff93-0.
INFO 03-02 01:14:13 [logger.py:42] Received request cmpl-accc2ce3ac9b4d2c99a5a7e0247db553-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:14:13 [async_llm.py:261] Added request cmpl-accc2ce3ac9b4d2c99a5a7e0247db553-0.
INFO 03-02 01:14:14 [logger.py:42] Received request cmpl-6753a9437ebb477e976bec47c1cd3769-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:14:14 [async_llm.py:261] Added request cmpl-6753a9437ebb477e976bec47c1cd3769-0.
INFO 03-02 01:14:15 [logger.py:42] Received request cmpl-22573d031bc74d319cf3996020fa4379-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:14:15 [async_llm.py:261] Added request cmpl-22573d031bc74d319cf3996020fa4379-0.
INFO 03-02 01:14:16 [logger.py:42] Received request cmpl-4c3445a631d247248cd50f9679c6cc4b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:14:16 [async_llm.py:261] Added request cmpl-4c3445a631d247248cd50f9679c6cc4b-0.
INFO 03-02 01:14:16 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.9 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 1.3%, Prefix cache hit rate: 0.0%
INFO 03-02 01:14:17 [logger.py:42] Received request cmpl-0163c622970a4f9db5e29756dc7be652-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:14:17 [async_llm.py:261] Added request cmpl-0163c622970a4f9db5e29756dc7be652-0.
INFO 03-02 01:14:18 [logger.py:42] Received request cmpl-f3cf70a79844469286b735879cbc0501-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:14:18 [async_llm.py:261] Added request cmpl-f3cf70a79844469286b735879cbc0501-0.
INFO 03-02 01:14:19 [logger.py:42] Received request cmpl-785ec9db1d464dd08be32ec38619fde9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:14:19 [async_llm.py:261] Added request cmpl-785ec9db1d464dd08be32ec38619fde9-0.
INFO 03-02 01:14:20 [logger.py:42] Received request cmpl-96e9f72ad994415db0a7b7da779da15b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:14:20 [async_llm.py:261] Added request cmpl-96e9f72ad994415db0a7b7da779da15b-0.
INFO 03-02 01:14:21 [logger.py:42] Received request cmpl-93b2eccdb5614e0dad0ad7b9f6673ade-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:14:21 [async_llm.py:261] Added request cmpl-93b2eccdb5614e0dad0ad7b9f6673ade-0.
INFO 03-02 01:14:22 [logger.py:42] Received request cmpl-739e55325ab048e098eb3873a6524330-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:14:22 [async_llm.py:261] Added request cmpl-739e55325ab048e098eb3873a6524330-0.
INFO 03-02 01:14:23 [logger.py:42] Received request cmpl-2fcabb704ae24700b176fb23b6449681-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:14:23 [async_llm.py:261] Added request cmpl-2fcabb704ae24700b176fb23b6449681-0.
INFO 03-02 01:14:24 [logger.py:42] Received request cmpl-62232646e874457182ecf42b90874cc9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:14:24 [async_llm.py:261] Added request cmpl-62232646e874457182ecf42b90874cc9-0.
INFO 03-02 01:14:26 [logger.py:42] Received request cmpl-7c8d12c3e2d041f494eba9222bbdd630-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:14:26 [async_llm.py:261] Added request cmpl-7c8d12c3e2d041f494eba9222bbdd630-0.
INFO 03-02 01:14:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.6 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:14:27 [logger.py:42] Received request cmpl-7d410a967aba4d6ab82b9316bb2674f9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:14:27 [async_llm.py:261] Added request cmpl-7d410a967aba4d6ab82b9316bb2674f9-0.
INFO 03-02 01:14:28 [logger.py:42] Received request cmpl-6f34a8dc6eba4863839b61cef42d7832-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:14:28 [async_llm.py:261] Added request cmpl-6f34a8dc6eba4863839b61cef42d7832-0.
INFO 03-02 01:14:29 [logger.py:42] Received request cmpl-63dab5a792d44514ac102570f71468c6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:14:29 [async_llm.py:261] Added request cmpl-63dab5a792d44514ac102570f71468c6-0.
INFO 03-02 01:14:30 [logger.py:42] Received request cmpl-b99040460f2249d59ee812c4b5209b60-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:14:30 [async_llm.py:261] Added request cmpl-b99040460f2249d59ee812c4b5209b60-0.
INFO 03-02 01:14:31 [logger.py:42] Received request cmpl-4278cd3c05af4db7ab2cb5905abdfdbb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:14:31 [async_llm.py:261] Added request cmpl-4278cd3c05af4db7ab2cb5905abdfdbb-0.
INFO 03-02 01:14:32 [logger.py:42] Received request cmpl-f3d89eb11e7c43fcabf472a943de63be-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:14:32 [async_llm.py:261] Added request cmpl-f3d89eb11e7c43fcabf472a943de63be-0.
INFO 03-02 01:14:33 [logger.py:42] Received request cmpl-5ba6ad7a41614d8daa1474b308052b88-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:14:33 [async_llm.py:261] Added request cmpl-5ba6ad7a41614d8daa1474b308052b88-0.
INFO 03-02 01:14:34 [logger.py:42] Received request cmpl-bab4f47c55de4962b94abac3c97f7d89-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:14:34 [async_llm.py:261] Added request cmpl-bab4f47c55de4962b94abac3c97f7d89-0.
INFO 03-02 01:14:35 [logger.py:42] Received request cmpl-c0361eea38ea4f75a97194cf260b9e1a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:14:35 [async_llm.py:261] Added request cmpl-c0361eea38ea4f75a97194cf260b9e1a-0.
INFO 03-02 01:14:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:14:36 [logger.py:42] Received request cmpl-c7c59a1395394220865f4694ce2fa1c0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:14:36 [async_llm.py:261] Added request cmpl-c7c59a1395394220865f4694ce2fa1c0-0.
INFO 03-02 01:14:37 [logger.py:42] Received request cmpl-b87f9c4b0d984c8c8575b23dbed4d766-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:14:37 [async_llm.py:261] Added request cmpl-b87f9c4b0d984c8c8575b23dbed4d766-0.
INFO 03-02 01:14:39 [logger.py:42] Received request cmpl-7b0025ea7d3547cba4c95f1b10aa8d60-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:14:39 [async_llm.py:261] Added request cmpl-7b0025ea7d3547cba4c95f1b10aa8d60-0.
INFO 03-02 01:14:40 [logger.py:42] Received request cmpl-4bfb3a8261004328b1e99491ea0c616a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:14:40 [async_llm.py:261] Added request cmpl-4bfb3a8261004328b1e99491ea0c616a-0.
INFO 03-02 01:14:41 [logger.py:42] Received request cmpl-5f4afa6787e84927aa78e4517acc835a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:14:41 [async_llm.py:261] Added request cmpl-5f4afa6787e84927aa78e4517acc835a-0.
INFO 03-02 01:14:42 [logger.py:42] Received request cmpl-1145cfabad324e4ba7a880f7356f4047-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:14:42 [async_llm.py:261] Added request cmpl-1145cfabad324e4ba7a880f7356f4047-0.
INFO 03-02 01:14:43 [logger.py:42] Received request cmpl-c972ee7da8f84c328056f0116d0fcd62-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:14:43 [async_llm.py:261] Added request cmpl-c972ee7da8f84c328056f0116d0fcd62-0.
INFO 03-02 01:14:44 [logger.py:42] Received request cmpl-db8ffa16e4ae40759d01c89c5d3e8464-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:14:44 [async_llm.py:261] Added request cmpl-db8ffa16e4ae40759d01c89c5d3e8464-0.
INFO 03-02 01:14:45 [logger.py:42] Received request cmpl-46c205f1b88c40fd98f338ba0cd7de57-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:14:45 [async_llm.py:261] Added request cmpl-46c205f1b88c40fd98f338ba0cd7de57-0.
INFO 03-02 01:14:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:14:46 [logger.py:42] Received request cmpl-4fa8427a7db148cfa935da806d9365ab-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:14:46 [async_llm.py:261] Added request cmpl-4fa8427a7db148cfa935da806d9365ab-0.
INFO 03-02 01:14:47 [logger.py:42] Received request cmpl-b9af533d1fd64e91a778bbb1a02ee881-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:14:47 [async_llm.py:261] Added request cmpl-b9af533d1fd64e91a778bbb1a02ee881-0.
[... 7 identical request/response triplets omitted (01:14:48 to 01:14:55): same prompt 'write a quick sort algorithm.', same SamplingParams (temperature=0.0, max_tokens=5), one "POST /v1/completions" returning 200 OK roughly every second ...]
INFO 03-02 01:14:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[... 10 identical request/response triplets omitted (01:14:56 to 01:15:06): same prompt and SamplingParams, one "POST /v1/completions" returning 200 OK roughly every second ...]
INFO 03-02 01:15:06 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[... 9 identical request/response triplets omitted (01:15:07 to 01:15:15): same prompt and SamplingParams, one "POST /v1/completions" returning 200 OK roughly every second ...]
INFO 03-02 01:15:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[... 9 identical request/response triplets omitted (01:15:17 to 01:15:25): same prompt and SamplingParams, one "POST /v1/completions" returning 200 OK roughly every second ...]
INFO 03-02 01:15:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[... 6 identical request/response triplets omitted (01:15:26 to 01:15:32, log truncated): same prompt and SamplingParams, one "POST /v1/completions" returning 200 OK roughly every second ...]
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:15:32 [async_llm.py:261] Added request cmpl-a45747d649cc43d6ac150522958e7f72-0.
INFO 03-02 01:15:33 [logger.py:42] Received request cmpl-50ca0f379b37472db55db79de591a190-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:15:33 [async_llm.py:261] Added request cmpl-50ca0f379b37472db55db79de591a190-0.
INFO 03-02 01:15:34 [logger.py:42] Received request cmpl-ea6cac4a60f1440f8111c62394ecd836-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:15:34 [async_llm.py:261] Added request cmpl-ea6cac4a60f1440f8111c62394ecd836-0.
INFO 03-02 01:15:35 [logger.py:42] Received request cmpl-4086c482f2c64f62aab981948b5e1be6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:15:35 [async_llm.py:261] Added request cmpl-4086c482f2c64f62aab981948b5e1be6-0.
INFO 03-02 01:15:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:15:36 [logger.py:42] Received request cmpl-89e771260db14a7ea8f3793d54fdf9f8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:15:36 [async_llm.py:261] Added request cmpl-89e771260db14a7ea8f3793d54fdf9f8-0.
INFO 03-02 01:15:37 [logger.py:42] Received request cmpl-f5709d25a6fb40e2b00e4070881d7aa9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:15:37 [async_llm.py:261] Added request cmpl-f5709d25a6fb40e2b00e4070881d7aa9-0.
INFO 03-02 01:15:38 [logger.py:42] Received request cmpl-93c10f4718364dd6853d1cfbbc07b668-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:15:38 [async_llm.py:261] Added request cmpl-93c10f4718364dd6853d1cfbbc07b668-0.
INFO 03-02 01:15:39 [logger.py:42] Received request cmpl-f95460dfdace474cae668834ec5f0dbd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:15:39 [async_llm.py:261] Added request cmpl-f95460dfdace474cae668834ec5f0dbd-0.
INFO 03-02 01:15:40 [logger.py:42] Received request cmpl-33ddb5f1c23f419089d26650efa26ed3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:15:40 [async_llm.py:261] Added request cmpl-33ddb5f1c23f419089d26650efa26ed3-0.
INFO 03-02 01:15:41 [logger.py:42] Received request cmpl-a2f9e141340847f7b6970efd7df4e79e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:15:41 [async_llm.py:261] Added request cmpl-a2f9e141340847f7b6970efd7df4e79e-0.
INFO 03-02 01:15:43 [logger.py:42] Received request cmpl-74101f4d5a054e3da7a33619f0da20da-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:15:43 [async_llm.py:261] Added request cmpl-74101f4d5a054e3da7a33619f0da20da-0.
INFO 03-02 01:15:44 [logger.py:42] Received request cmpl-ebb5773e89a24eacbcb5d3571f99fadc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:15:44 [async_llm.py:261] Added request cmpl-ebb5773e89a24eacbcb5d3571f99fadc-0.
INFO 03-02 01:15:45 [logger.py:42] Received request cmpl-b7dbfebe6d8b490fb307b89d4b76e719-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:15:45 [async_llm.py:261] Added request cmpl-b7dbfebe6d8b490fb307b89d4b76e719-0.
INFO 03-02 01:15:46 [logger.py:42] Received request cmpl-3c4062c6753c42d480ffeda0b4eee5e1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:15:46 [async_llm.py:261] Added request cmpl-3c4062c6753c42d480ffeda0b4eee5e1-0.
INFO 03-02 01:15:46 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:15:47 [logger.py:42] Received request cmpl-14ef467648e34bd78fa411478415329e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:15:47 [async_llm.py:261] Added request cmpl-14ef467648e34bd78fa411478415329e-0.
INFO 03-02 01:15:48 [logger.py:42] Received request cmpl-e4be985370fa47a281114295dcb16f8f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:15:48 [async_llm.py:261] Added request cmpl-e4be985370fa47a281114295dcb16f8f-0.
INFO 03-02 01:15:49 [logger.py:42] Received request cmpl-dfefe5bb41394ba5a7af41fae5d71e4a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:15:49 [async_llm.py:261] Added request cmpl-dfefe5bb41394ba5a7af41fae5d71e4a-0.
INFO 03-02 01:15:50 [logger.py:42] Received request cmpl-47deb98e09f64015bc58e7807115fa7d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:15:50 [async_llm.py:261] Added request cmpl-47deb98e09f64015bc58e7807115fa7d-0.
INFO 03-02 01:15:51 [logger.py:42] Received request cmpl-05b44dbe5305471ea1598657333aebd6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:15:51 [async_llm.py:261] Added request cmpl-05b44dbe5305471ea1598657333aebd6-0.
INFO 03-02 01:15:52 [logger.py:42] Received request cmpl-fe0d3ea4580f4d90bc599e14a04ea41e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:15:52 [async_llm.py:261] Added request cmpl-fe0d3ea4580f4d90bc599e14a04ea41e-0.
INFO 03-02 01:15:53 [logger.py:42] Received request cmpl-c9c528ee577d46998a781427491f818c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:15:53 [async_llm.py:261] Added request cmpl-c9c528ee577d46998a781427491f818c-0.
INFO 03-02 01:15:54 [logger.py:42] Received request cmpl-54428b83d8914151862ddab7d9c89165-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:15:54 [async_llm.py:261] Added request cmpl-54428b83d8914151862ddab7d9c89165-0.
INFO 03-02 01:15:56 [logger.py:42] Received request cmpl-e4789f16133e43388cac59e9ec279da9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:15:56 [async_llm.py:261] Added request cmpl-e4789f16133e43388cac59e9ec279da9-0.
INFO 03-02 01:15:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:15:57 [logger.py:42] Received request cmpl-fb48375b99154cad8c3f13586391e17e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:15:57 [async_llm.py:261] Added request cmpl-fb48375b99154cad8c3f13586391e17e-0.
INFO 03-02 01:15:58 [logger.py:42] Received request cmpl-14027c6b338547e7b55bbfc26e2c217e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:15:58 [async_llm.py:261] Added request cmpl-14027c6b338547e7b55bbfc26e2c217e-0.
INFO 03-02 01:15:59 [logger.py:42] Received request cmpl-872c8749402b45338296f584ffd9bc2d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:15:59 [async_llm.py:261] Added request cmpl-872c8749402b45338296f584ffd9bc2d-0.
INFO 03-02 01:16:00 [logger.py:42] Received request cmpl-7769d24292704b65a53b2c5d4d126b79-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:16:00 [async_llm.py:261] Added request cmpl-7769d24292704b65a53b2c5d4d126b79-0.
INFO 03-02 01:16:01 [logger.py:42] Received request cmpl-2d090dd137974fbea0f2b2b1c33aa9cf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:16:01 [async_llm.py:261] Added request cmpl-2d090dd137974fbea0f2b2b1c33aa9cf-0.
INFO 03-02 01:16:02 [logger.py:42] Received request cmpl-b18986e37a1e43b2b8faeff9549321c8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:16:02 [async_llm.py:261] Added request cmpl-b18986e37a1e43b2b8faeff9549321c8-0.
INFO 03-02 01:16:03 [logger.py:42] Received request cmpl-b5723f548c574bc8bb4164da96debe3d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:16:03 [async_llm.py:261] Added request cmpl-b5723f548c574bc8bb4164da96debe3d-0.
INFO 03-02 01:16:04 [logger.py:42] Received request cmpl-07e5959e69564a41a1d2ebdf0e9775c9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:16:04 [async_llm.py:261] Added request cmpl-07e5959e69564a41a1d2ebdf0e9775c9-0.
INFO 03-02 01:16:05 [logger.py:42] Received request cmpl-d381946d3ed740198fb80901ed38d7f7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:16:05 [async_llm.py:261] Added request cmpl-d381946d3ed740198fb80901ed38d7f7-0.
INFO 03-02 01:16:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
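The periodic `loggers.py` stats lines interleaved with the request triplets can be scraped to track throughput and KV-cache pressure over time. A small sketch of such a parser — the regex is keyed to the exact field wording seen in this log, which is an assumption; vLLM's stats format can differ across versions:

```python
import re

# Field names and punctuation below mirror the stats lines in this log;
# adjust the pattern if the engine's log format changes.
STATS = re.compile(
    r"Avg prompt throughput: (?P<prompt>[\d.]+) tokens/s, "
    r"Avg generation throughput: (?P<gen>[\d.]+) tokens/s, "
    r"Running: (?P<running>\d+) reqs, Waiting: (?P<waiting>\d+) reqs, "
    r"GPU KV cache usage: (?P<kv>[\d.]+)%"
)

line = ("INFO 03-02 01:16:06 [loggers.py:116] Engine 000: "
        "Avg prompt throughput: 6.3 tokens/s, "
        "Avg generation throughput: 4.5 tokens/s, "
        "Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, "
        "Prefix cache hit rate: 0.0%")

m = STATS.search(line)
stats = {k: float(v) for k, v in m.groupdict().items()}
print(stats)  # throughput in tokens/s, queue depths, KV-cache usage in %
```

Note that `Running: 0 reqs` alongside nonzero average throughput is consistent with very short requests (`max_tokens=5`) that complete between stats intervals, so the instantaneous queue is empty when each snapshot is taken.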
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:16:50 [async_llm.py:261] Added request cmpl-2752a52c51844ca9977e65a4e3a09267-0.
INFO 03-02 01:16:51 [logger.py:42] Received request cmpl-20b4f2f78b144e5a9513cc6452be9a1b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:16:51 [async_llm.py:261] Added request cmpl-20b4f2f78b144e5a9513cc6452be9a1b-0.
INFO 03-02 01:16:52 [logger.py:42] Received request cmpl-dca703fd25314e9cbcf2d2e544907628-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:16:52 [async_llm.py:261] Added request cmpl-dca703fd25314e9cbcf2d2e544907628-0.
INFO 03-02 01:16:53 [logger.py:42] Received request cmpl-7385a6440a8c4b93afa5b6276c01a12d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:16:53 [async_llm.py:261] Added request cmpl-7385a6440a8c4b93afa5b6276c01a12d-0.
INFO 03-02 01:16:54 [logger.py:42] Received request cmpl-56b9479c2bee4d0aba57bec712cf5e87-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:16:54 [async_llm.py:261] Added request cmpl-56b9479c2bee4d0aba57bec712cf5e87-0.
INFO 03-02 01:16:55 [logger.py:42] Received request cmpl-0fb796d0f9114ac0899128a8ef1cec98-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:16:55 [async_llm.py:261] Added request cmpl-0fb796d0f9114ac0899128a8ef1cec98-0.
INFO 03-02 01:16:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:16:56 [logger.py:42] Received request cmpl-b238fdc5e7904827a379ab84b17e0f39-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:16:56 [async_llm.py:261] Added request cmpl-b238fdc5e7904827a379ab84b17e0f39-0.
INFO 03-02 01:16:57 [logger.py:42] Received request cmpl-1bcfd240306e4630bc0d2b107dc5bde9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:16:57 [async_llm.py:261] Added request cmpl-1bcfd240306e4630bc0d2b107dc5bde9-0.
INFO 03-02 01:16:58 [logger.py:42] Received request cmpl-c293b2327497496d97f11dfdbdeb0a08-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:16:58 [async_llm.py:261] Added request cmpl-c293b2327497496d97f11dfdbdeb0a08-0.
INFO 03-02 01:17:00 [logger.py:42] Received request cmpl-fe1ca5a870d54c269f710ec1448e3d99-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:17:00 [async_llm.py:261] Added request cmpl-fe1ca5a870d54c269f710ec1448e3d99-0.
INFO 03-02 01:17:01 [logger.py:42] Received request cmpl-5ebfcf788517411d815d860975529003-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:17:01 [async_llm.py:261] Added request cmpl-5ebfcf788517411d815d860975529003-0.
INFO 03-02 01:17:02 [logger.py:42] Received request cmpl-45731f0d2a7d449db9252277aa7a6ef4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:17:02 [async_llm.py:261] Added request cmpl-45731f0d2a7d449db9252277aa7a6ef4-0.
INFO 03-02 01:17:03 [logger.py:42] Received request cmpl-bd04c467852f444dace852d253fa820b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:17:03 [async_llm.py:261] Added request cmpl-bd04c467852f444dace852d253fa820b-0.
INFO 03-02 01:17:04 [logger.py:42] Received request cmpl-6dd5e0eb2a92481fba5b2d340618c7b0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:17:04 [async_llm.py:261] Added request cmpl-6dd5e0eb2a92481fba5b2d340618c7b0-0.
INFO 03-02 01:17:05 [logger.py:42] Received request cmpl-9e83506ecbbb4978877469ef4a8cf5c2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:17:05 [async_llm.py:261] Added request cmpl-9e83506ecbbb4978877469ef4a8cf5c2-0.
INFO 03-02 01:17:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:17:06 [logger.py:42] Received request cmpl-25d30ce398344be2934d9906e002b929-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:17:06 [async_llm.py:261] Added request cmpl-25d30ce398344be2934d9906e002b929-0.
INFO 03-02 01:17:07 [logger.py:42] Received request cmpl-ecad061d3b99491397b9783796e1f116-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:17:07 [async_llm.py:261] Added request cmpl-ecad061d3b99491397b9783796e1f116-0.
INFO 03-02 01:17:08 [logger.py:42] Received request cmpl-9ef7309298374209b51b4d1e96d536cd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:17:08 [async_llm.py:261] Added request cmpl-9ef7309298374209b51b4d1e96d536cd-0.
INFO 03-02 01:17:09 [logger.py:42] Received request cmpl-97c673d5327f4ad5af1d56afab6abdf4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:17:09 [async_llm.py:261] Added request cmpl-97c673d5327f4ad5af1d56afab6abdf4-0.
INFO 03-02 01:17:10 [logger.py:42] Received request cmpl-728a4ebbd423486e9de3a9de4beb77ce-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:17:10 [async_llm.py:261] Added request cmpl-728a4ebbd423486e9de3a9de4beb77ce-0.
INFO 03-02 01:17:11 [logger.py:42] Received request cmpl-0f74a22548114c05845df94768f282cc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:17:11 [async_llm.py:261] Added request cmpl-0f74a22548114c05845df94768f282cc-0.
INFO 03-02 01:17:13 [logger.py:42] Received request cmpl-d5af1f7c5f504c6cbddac1c3b66a7817-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:17:13 [async_llm.py:261] Added request cmpl-d5af1f7c5f504c6cbddac1c3b66a7817-0.
INFO 03-02 01:17:14 [logger.py:42] Received request cmpl-dc35646266b2419590924f79beafaa96-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:17:14 [async_llm.py:261] Added request cmpl-dc35646266b2419590924f79beafaa96-0.
INFO 03-02 01:17:15 [logger.py:42] Received request cmpl-d8da0bb5b95f4dc2b47716ce656fc912-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:17:15 [async_llm.py:261] Added request cmpl-d8da0bb5b95f4dc2b47716ce656fc912-0.
INFO 03-02 01:17:16 [logger.py:42] Received request cmpl-8e2b97a3ec4845c6ab226f6dd3722d61-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:17:16 [async_llm.py:261] Added request cmpl-8e2b97a3ec4845c6ab226f6dd3722d61-0.
INFO 03-02 01:17:16 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:17:17 [logger.py:42] Received request cmpl-aa35ffd208a241f2992a084be2222096-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:17:17 [async_llm.py:261] Added request cmpl-aa35ffd208a241f2992a084be2222096-0.
INFO 03-02 01:17:18 [logger.py:42] Received request cmpl-8a1e215924834fb5bef80e17ea48d4b5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:17:18 [async_llm.py:261] Added request cmpl-8a1e215924834fb5bef80e17ea48d4b5-0.
INFO 03-02 01:17:19 [logger.py:42] Received request cmpl-e50415f967c14e62addad6a5b681d1a6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:17:19 [async_llm.py:261] Added request cmpl-e50415f967c14e62addad6a5b681d1a6-0.
INFO 03-02 01:17:20 [logger.py:42] Received request cmpl-682e73c825254f48b1180c03c43689b1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:17:20 [async_llm.py:261] Added request cmpl-682e73c825254f48b1180c03c43689b1-0.
INFO 03-02 01:17:21 [logger.py:42] Received request cmpl-a639bde368c645619d70b7272958495b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:17:21 [async_llm.py:261] Added request cmpl-a639bde368c645619d70b7272958495b-0.
INFO 03-02 01:17:22 [logger.py:42] Received request cmpl-6a1f41f341194e78b00d8c0612c46778-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:17:22 [async_llm.py:261] Added request cmpl-6a1f41f341194e78b00d8c0612c46778-0.
INFO 03-02 01:17:23 [logger.py:42] Received request cmpl-28139842c38145b294afedc59b99015b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:17:23 [async_llm.py:261] Added request cmpl-28139842c38145b294afedc59b99015b-0.
INFO 03-02 01:17:24 [logger.py:42] Received request cmpl-789b147496194c1daddf439d561864b7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:17:24 [async_llm.py:261] Added request cmpl-789b147496194c1daddf439d561864b7-0.
INFO 03-02 01:17:26 [logger.py:42] Received request cmpl-d9df544a9d154eca9365af07df551174-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:17:26 [async_llm.py:261] Added request cmpl-d9df544a9d154eca9365af07df551174-0.
INFO 03-02 01:17:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:17:27 [logger.py:42] Received request cmpl-a9bc703244584dcbb5a22ddeec0681ec-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:17:27 [async_llm.py:261] Added request cmpl-a9bc703244584dcbb5a22ddeec0681ec-0.
INFO 03-02 01:17:28 [logger.py:42] Received request cmpl-08249f2d0559419fb91447ee7ad52498-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:17:28 [async_llm.py:261] Added request cmpl-08249f2d0559419fb91447ee7ad52498-0.
INFO 03-02 01:17:29 [logger.py:42] Received request cmpl-113e1d6e58e54d9eaa8a829076565707-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:17:29 [async_llm.py:261] Added request cmpl-113e1d6e58e54d9eaa8a829076565707-0.
INFO 03-02 01:17:30 [logger.py:42] Received request cmpl-5642b6adb6a24090a445d893808e0ead-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:17:30 [async_llm.py:261] Added request cmpl-5642b6adb6a24090a445d893808e0ead-0.
INFO 03-02 01:17:31 [logger.py:42] Received request cmpl-dfa108a46b1646baaeb8a273ac06e0e3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:17:31 [async_llm.py:261] Added request cmpl-dfa108a46b1646baaeb8a273ac06e0e3-0.
INFO 03-02 01:17:32 [logger.py:42] Received request cmpl-4588500d37ed43c2afcebfa815f94cf3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:17:32 [async_llm.py:261] Added request cmpl-4588500d37ed43c2afcebfa815f94cf3-0.
INFO 03-02 01:17:33 [logger.py:42] Received request cmpl-7617f1e9de4b4febad1bc140c6bf1b88-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:17:33 [async_llm.py:261] Added request cmpl-7617f1e9de4b4febad1bc140c6bf1b88-0.
INFO 03-02 01:17:34 [logger.py:42] Received request cmpl-22847037b30546e4b978d520d2288c3f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:17:34 [async_llm.py:261] Added request cmpl-22847037b30546e4b978d520d2288c3f-0.
INFO 03-02 01:17:35 [logger.py:42] Received request cmpl-b215df86305d4bb1b6d7a902a7c5af2d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:17:35 [async_llm.py:261] Added request cmpl-b215df86305d4bb1b6d7a902a7c5af2d-0.
INFO 03-02 01:17:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
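The periodic `loggers.py` stats line above can be cross-checked against the surrounding request entries. In the ten-second window it covers (requests timestamped 01:17:27 through 01:17:35, nine in total), each request carries 7 prompt tokens (the length of `prompt_token_ids`) and is capped at 5 generated tokens (`max_tokens=5`). A quick sanity check, assuming every request runs to its token cap (not directly confirmed by the log):

```python
# Cross-check vLLM's 10-second throughput averages against the request log.
# Window and request count are inferred from the log timestamps above.
requests_in_window = 9
prompt_tokens_per_request = 7      # len(prompt_token_ids) in each entry
generated_tokens_per_request = 5   # max_tokens=5; assumes each request hits the cap
window_seconds = 10.0

prompt_tps = requests_in_window * prompt_tokens_per_request / window_seconds
gen_tps = requests_in_window * generated_tokens_per_request / window_seconds

print(prompt_tps)  # 6.3 — matches "Avg prompt throughput: 6.3 tokens/s"
print(gen_tps)     # 4.5 — matches "Avg generation throughput: 4.5 tokens/s"
```

The later stats line reporting 7.0 / 5.0 tokens/s corresponds the same way to a window containing ten requests.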
INFO 03-02 01:17:36 [logger.py:42] Received request cmpl-00e11f1088a54236ac32509cc773d11d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:17:36 [async_llm.py:261] Added request cmpl-00e11f1088a54236ac32509cc773d11d-0.
INFO 03-02 01:17:37 [logger.py:42] Received request cmpl-ae1c2a0024094cee811efffb40bdd3cd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:17:37 [async_llm.py:261] Added request cmpl-ae1c2a0024094cee811efffb40bdd3cd-0.
INFO 03-02 01:17:39 [logger.py:42] Received request cmpl-97c380e4e08444b886ea27c62ecc2474-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:17:39 [async_llm.py:261] Added request cmpl-97c380e4e08444b886ea27c62ecc2474-0.
INFO 03-02 01:17:40 [logger.py:42] Received request cmpl-f0289921573746bd87cce42db82e5bbe-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:17:40 [async_llm.py:261] Added request cmpl-f0289921573746bd87cce42db82e5bbe-0.
INFO 03-02 01:17:41 [logger.py:42] Received request cmpl-3d573c2895724a50af9402bd4e88bb90-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:17:41 [async_llm.py:261] Added request cmpl-3d573c2895724a50af9402bd4e88bb90-0.
INFO 03-02 01:17:42 [logger.py:42] Received request cmpl-5d93a17b93e44ef9824c6ff0fbfda65b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:17:42 [async_llm.py:261] Added request cmpl-5d93a17b93e44ef9824c6ff0fbfda65b-0.
INFO 03-02 01:17:43 [logger.py:42] Received request cmpl-1b1ea5e2572e4fe6b954c77f8452e81a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:17:43 [async_llm.py:261] Added request cmpl-1b1ea5e2572e4fe6b954c77f8452e81a-0.
INFO 03-02 01:17:44 [logger.py:42] Received request cmpl-54db490fcd854de7ba267febcbf15348-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:17:44 [async_llm.py:261] Added request cmpl-54db490fcd854de7ba267febcbf15348-0.
INFO 03-02 01:17:45 [logger.py:42] Received request cmpl-5e02b6b0a3bf4f17a301270f267f6a81-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:17:45 [async_llm.py:261] Added request cmpl-5e02b6b0a3bf4f17a301270f267f6a81-0.
INFO 03-02 01:17:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:17:46 [logger.py:42] Received request cmpl-a5f3e86bd55a44929117cac1cc1b99ef-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:17:46 [async_llm.py:261] Added request cmpl-a5f3e86bd55a44929117cac1cc1b99ef-0.
INFO 03-02 01:17:47 [logger.py:42] Received request cmpl-5a6d3fcfd3f14f2a98ba27893bea44a0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:17:47 [async_llm.py:261] Added request cmpl-5a6d3fcfd3f14f2a98ba27893bea44a0-0.
INFO 03-02 01:17:48 [logger.py:42] Received request cmpl-2aea0ced690d403abd2a707c514cad1d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:17:48 [async_llm.py:261] Added request cmpl-2aea0ced690d403abd2a707c514cad1d-0.
INFO 03-02 01:17:49 [logger.py:42] Received request cmpl-f083dc9c35864a2e842fd4df9442a052-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:17:49 [async_llm.py:261] Added request cmpl-f083dc9c35864a2e842fd4df9442a052-0.
INFO 03-02 01:17:50 [logger.py:42] Received request cmpl-9dd2078c3c15479eb66498637f5941a8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:17:50 [async_llm.py:261] Added request cmpl-9dd2078c3c15479eb66498637f5941a8-0.
INFO 03-02 01:17:52 [logger.py:42] Received request cmpl-c7259e48af6c48e388772435db90f0fb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:17:52 [async_llm.py:261] Added request cmpl-c7259e48af6c48e388772435db90f0fb-0.
INFO 03-02 01:17:53 [logger.py:42] Received request cmpl-f397304f6db8416bbe2d23e3819f0cb7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:17:53 [async_llm.py:261] Added request cmpl-f397304f6db8416bbe2d23e3819f0cb7-0.
INFO 03-02 01:17:54 [logger.py:42] Received request cmpl-e6f92e22efe9452596f91141998ad7be-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:17:54 [async_llm.py:261] Added request cmpl-e6f92e22efe9452596f91141998ad7be-0.
INFO 03-02 01:17:55 [logger.py:42] Received request cmpl-c9e6bf803379490386e09c361df35a07-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:17:55 [async_llm.py:261] Added request cmpl-c9e6bf803379490386e09c361df35a07-0.
INFO 03-02 01:17:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:17:56 [logger.py:42] Received request cmpl-4c37264beff347b589d656b2d618cb46-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:17:56 [async_llm.py:261] Added request cmpl-4c37264beff347b589d656b2d618cb46-0.
INFO 03-02 01:17:57 [logger.py:42] Received request cmpl-643621ffc71d4deda0c47edac9c72448-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:17:57 [async_llm.py:261] Added request cmpl-643621ffc71d4deda0c47edac9c72448-0.
INFO 03-02 01:17:58 [logger.py:42] Received request cmpl-a07fc8a6a016453cb4d7e0515ab7e4f7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:17:58 [async_llm.py:261] Added request cmpl-a07fc8a6a016453cb4d7e0515ab7e4f7-0.
INFO 03-02 01:17:59 [logger.py:42] Received request cmpl-1253e9be3a534b499a4ddecd2f61a340-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:17:59 [async_llm.py:261] Added request cmpl-1253e9be3a534b499a4ddecd2f61a340-0.
INFO 03-02 01:18:00 [logger.py:42] Received request cmpl-da6c054ce8bb4b45bafeca5cf26a573b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:18:00 [async_llm.py:261] Added request cmpl-da6c054ce8bb4b45bafeca5cf26a573b-0.
INFO 03-02 01:18:01 [logger.py:42] Received request cmpl-f6a749c4157d4e87aab8f13de87fd098-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:18:01 [async_llm.py:261] Added request cmpl-f6a749c4157d4e87aab8f13de87fd098-0.
INFO 03-02 01:18:02 [logger.py:42] Received request cmpl-3f84270688504911b4ed3608888d2a85-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:18:02 [async_llm.py:261] Added request cmpl-3f84270688504911b4ed3608888d2a85-0.
INFO 03-02 01:18:03 [logger.py:42] Received request cmpl-3c40a2635c5a477381e99a35d871ce7c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:18:03 [async_llm.py:261] Added request cmpl-3c40a2635c5a477381e99a35d871ce7c-0.
INFO 03-02 01:18:05 [logger.py:42] Received request cmpl-6ba4334d09d040a0997959b2eef82372-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:18:05 [async_llm.py:261] Added request cmpl-6ba4334d09d040a0997959b2eef82372-0.
INFO 03-02 01:18:06 [logger.py:42] Received request cmpl-d55533074541424aaeab1b2a2a69f0ad-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:18:06 [async_llm.py:261] Added request cmpl-d55533074541424aaeab1b2a2a69f0ad-0.
INFO 03-02 01:18:06 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
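The throughput figures in the stats line above are consistent with the request pattern in the log: each request carries 7 prompt tokens (the length of `prompt_token_ids`) and caps generation at `max_tokens=5`, and requests arrive at roughly one per second. A quick sanity check, assuming exactly one request per second over the reporting window (an inference from the log timestamps, not something the engine reports directly):

```python
# Sanity-check the reported averages against the observed request pattern.
# Assumption: ~1 request/second, inferred from the one-second spacing of the
# log timestamps; token counts are taken directly from the log entries.
prompt_tokens_per_request = 7      # len(prompt_token_ids) in each entry
generated_tokens_per_request = 5   # max_tokens=5, fully consumed here
requests_per_second = 1.0

avg_prompt_tps = prompt_tokens_per_request * requests_per_second
avg_generation_tps = generated_tokens_per_request * requests_per_second

print(avg_prompt_tps, avg_generation_tps)  # 7.0 5.0
```

This matches the "Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s" reported when requests land once per second; the slightly lower 6.3/4.5 figures elsewhere correspond to windows with an occasional two-second gap between requests.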
INFO 03-02 01:18:07 [logger.py:42] Received request cmpl-d8fb4b4cff8a472c82550a7375e2703a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:18:07 [async_llm.py:261] Added request cmpl-d8fb4b4cff8a472c82550a7375e2703a-0.
INFO 03-02 01:18:08 [logger.py:42] Received request cmpl-654572a92f7f46288ab61ab158949544-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:18:08 [async_llm.py:261] Added request cmpl-654572a92f7f46288ab61ab158949544-0.
INFO 03-02 01:18:09 [logger.py:42] Received request cmpl-e311ad078f614b59b0391b6a03342e67-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:18:09 [async_llm.py:261] Added request cmpl-e311ad078f614b59b0391b6a03342e67-0.
INFO 03-02 01:18:10 [logger.py:42] Received request cmpl-2b94b372a637457695e52a2fa0b94a74-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:18:10 [async_llm.py:261] Added request cmpl-2b94b372a637457695e52a2fa0b94a74-0.
INFO 03-02 01:18:11 [logger.py:42] Received request cmpl-7bb4a76d0caf4d878974711c9521f0eb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:18:11 [async_llm.py:261] Added request cmpl-7bb4a76d0caf4d878974711c9521f0eb-0.
INFO 03-02 01:18:12 [logger.py:42] Received request cmpl-00f63f8eb90349c69b23de8bb8102772-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:18:12 [async_llm.py:261] Added request cmpl-00f63f8eb90349c69b23de8bb8102772-0.
INFO 03-02 01:18:13 [logger.py:42] Received request cmpl-1e822a2779ad4835acaddc6132c87a36-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:18:13 [async_llm.py:261] Added request cmpl-1e822a2779ad4835acaddc6132c87a36-0.
INFO 03-02 01:18:14 [logger.py:42] Received request cmpl-529a81bf4437419ca072e6f1aa3d5600-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:18:14 [async_llm.py:261] Added request cmpl-529a81bf4437419ca072e6f1aa3d5600-0.
INFO 03-02 01:18:15 [logger.py:42] Received request cmpl-1acada2f5a98406bb8d5f1ec3bcb2a66-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:18:15 [async_llm.py:261] Added request cmpl-1acada2f5a98406bb8d5f1ec3bcb2a66-0.
INFO 03-02 01:18:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:18:17 [logger.py:42] Received request cmpl-06654d3b1b9b42fcaf60a14191554575-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:18:17 [async_llm.py:261] Added request cmpl-06654d3b1b9b42fcaf60a14191554575-0.
INFO 03-02 01:18:18 [logger.py:42] Received request cmpl-2188172e07e74e0f882e17dea2eaff99-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:18:18 [async_llm.py:261] Added request cmpl-2188172e07e74e0f882e17dea2eaff99-0.
INFO 03-02 01:18:19 [logger.py:42] Received request cmpl-1db7dac9396c438494ccbe37d2872218-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:18:19 [async_llm.py:261] Added request cmpl-1db7dac9396c438494ccbe37d2872218-0.
INFO 03-02 01:18:20 [logger.py:42] Received request cmpl-2fc65e94a3494a399e9d497efdcc5592-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:18:20 [async_llm.py:261] Added request cmpl-2fc65e94a3494a399e9d497efdcc5592-0.
INFO 03-02 01:18:21 [logger.py:42] Received request cmpl-3ec184b23a93444ea8a3c9af224da730-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:18:21 [async_llm.py:261] Added request cmpl-3ec184b23a93444ea8a3c9af224da730-0.
INFO 03-02 01:18:22 [logger.py:42] Received request cmpl-2b4e3ac224b64121b66760b3164eb082-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:18:22 [async_llm.py:261] Added request cmpl-2b4e3ac224b64121b66760b3164eb082-0.
INFO 03-02 01:18:23 [logger.py:42] Received request cmpl-74d4ef411b474e26abc7328189cbe557-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:18:23 [async_llm.py:261] Added request cmpl-74d4ef411b474e26abc7328189cbe557-0.
INFO 03-02 01:18:24 [logger.py:42] Received request cmpl-caae4be35b7f4b039cf1dfee36b0004b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:18:24 [async_llm.py:261] Added request cmpl-caae4be35b7f4b039cf1dfee36b0004b-0.
INFO 03-02 01:18:25 [logger.py:42] Received request cmpl-3f64d697ce04462f966b3f93536009b4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:18:25 [async_llm.py:261] Added request cmpl-3f64d697ce04462f966b3f93536009b4-0.
INFO 03-02 01:18:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:18:26 [logger.py:42] Received request cmpl-2bd64f0972bb4878af95a785d6a1d7e2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:18:26 [async_llm.py:261] Added request cmpl-2bd64f0972bb4878af95a785d6a1d7e2-0.
INFO 03-02 01:18:27 [logger.py:42] Received request cmpl-14a14ed96ccc44a89fac5ee29b29a2e7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:18:27 [async_llm.py:261] Added request cmpl-14a14ed96ccc44a89fac5ee29b29a2e7-0.
INFO 03-02 01:18:28 [logger.py:42] Received request cmpl-aa6534c9459c4586a74afea6805c9801-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:18:28 [async_llm.py:261] Added request cmpl-aa6534c9459c4586a74afea6805c9801-0.
INFO 03-02 01:18:30 [logger.py:42] Received request cmpl-f6c4d79ddd604c1fa03bcb795481aebc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:18:30 [async_llm.py:261] Added request cmpl-f6c4d79ddd604c1fa03bcb795481aebc-0.
INFO 03-02 01:18:31 [logger.py:42] Received request cmpl-c10e4e4cea954b278eed96ddc1cbe17d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:18:31 [async_llm.py:261] Added request cmpl-c10e4e4cea954b278eed96ddc1cbe17d-0.
INFO 03-02 01:18:32 [logger.py:42] Received request cmpl-c11a554566034eb189ba2f8867373330-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:18:32 [async_llm.py:261] Added request cmpl-c11a554566034eb189ba2f8867373330-0.
INFO 03-02 01:18:33 [logger.py:42] Received request cmpl-402b6d22748b430d83dce92bf14c26b6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:18:33 [async_llm.py:261] Added request cmpl-402b6d22748b430d83dce92bf14c26b6-0.
INFO 03-02 01:18:34 [logger.py:42] Received request cmpl-827b5f84b21c40a085068520b13870ec-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:18:34 [async_llm.py:261] Added request cmpl-827b5f84b21c40a085068520b13870ec-0.
INFO 03-02 01:18:35 [logger.py:42] Received request cmpl-3939dac549b445cfa6837f5bd75912c0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:18:35 [async_llm.py:261] Added request cmpl-3939dac549b445cfa6837f5bd75912c0-0.
INFO 03-02 01:18:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:18:36 [logger.py:42] Received request cmpl-0f1817cf8b734270a3104a4e6ac30073-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:18:36 [async_llm.py:261] Added request cmpl-0f1817cf8b734270a3104a4e6ac30073-0.
INFO 03-02 01:18:37 [logger.py:42] Received request cmpl-f424fb0b9ba54314bfe71e43069fa5a4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:18:37 [async_llm.py:261] Added request cmpl-f424fb0b9ba54314bfe71e43069fa5a4-0.
INFO 03-02 01:18:38 [logger.py:42] Received request cmpl-12a2b4962d714550a36e2210da9e4f16-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:18:38 [async_llm.py:261] Added request cmpl-12a2b4962d714550a36e2210da9e4f16-0.
INFO 03-02 01:18:39 [logger.py:42] Received request cmpl-474be400feb448aea4f123ec74de460a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:18:39 [async_llm.py:261] Added request cmpl-474be400feb448aea4f123ec74de460a-0.
INFO 03-02 01:18:40 [logger.py:42] Received request cmpl-9f76246e0bd24d389102ca3e4594b141-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:18:40 [async_llm.py:261] Added request cmpl-9f76246e0bd24d389102ca3e4594b141-0.
INFO 03-02 01:18:41 [logger.py:42] Received request cmpl-a8ff3e4941364a42bef3b223d5a4048f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:18:41 [async_llm.py:261] Added request cmpl-a8ff3e4941364a42bef3b223d5a4048f-0.
[... 4 further /v1/completions request cycles (01:18:43 to 01:18:46), identical prompt and SamplingParams, elided ...]
INFO 03-02 01:18:46 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[... 9 further /v1/completions request cycles (01:18:47 to 01:18:56), identical prompt and SamplingParams, elided ...]
INFO 03-02 01:18:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[... 9 further /v1/completions request cycles (01:18:57 to 01:19:05), identical prompt and SamplingParams, elided ...]
INFO 03-02 01:19:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[... 9 further /v1/completions request cycles (01:19:06 to 01:19:15), identical prompt and SamplingParams, elided ...]
INFO 03-02 01:19:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[... 9 further /v1/completions request cycles (01:19:16 to 01:19:25), identical prompt and SamplingParams, elided ...]
INFO 03-02 01:19:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:19:26 [logger.py:42] Received request cmpl-6c00f5a330e94670a17892aa128dbfc1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:19:26 [async_llm.py:261] Added request cmpl-6c00f5a330e94670a17892aa128dbfc1-0.
INFO 03-02 01:19:27 [logger.py:42] Received request cmpl-fa8d9b9bcb2a48efa033192ea0c064d4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:19:27 [async_llm.py:261] Added request cmpl-fa8d9b9bcb2a48efa033192ea0c064d4-0.
INFO 03-02 01:19:28 [logger.py:42] Received request cmpl-26a87d4ae5ad4055b3a0ec7fff7e8fd1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:19:28 [async_llm.py:261] Added request cmpl-26a87d4ae5ad4055b3a0ec7fff7e8fd1-0.
INFO 03-02 01:19:29 [logger.py:42] Received request cmpl-b1f35b1e83ea426f99f06a07bc61c98e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:19:29 [async_llm.py:261] Added request cmpl-b1f35b1e83ea426f99f06a07bc61c98e-0.
INFO 03-02 01:19:30 [logger.py:42] Received request cmpl-b12f2837ce98430990510626aa0100b8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:19:30 [async_llm.py:261] Added request cmpl-b12f2837ce98430990510626aa0100b8-0.
INFO 03-02 01:19:31 [logger.py:42] Received request cmpl-8212b205739842fb822429dc6ba91966-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:19:31 [async_llm.py:261] Added request cmpl-8212b205739842fb822429dc6ba91966-0.
INFO 03-02 01:19:32 [logger.py:42] Received request cmpl-bc6e28bc79474a65ae27b80b8d1c102f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:19:32 [async_llm.py:261] Added request cmpl-bc6e28bc79474a65ae27b80b8d1c102f-0.
INFO 03-02 01:19:33 [logger.py:42] Received request cmpl-b1bdda5f99f4401692ce597752313afc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:19:33 [async_llm.py:261] Added request cmpl-b1bdda5f99f4401692ce597752313afc-0.
INFO 03-02 01:19:35 [logger.py:42] Received request cmpl-28145e31b15c4a30a5358616dbdc12ae-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:19:35 [async_llm.py:261] Added request cmpl-28145e31b15c4a30a5358616dbdc12ae-0.
INFO 03-02 01:19:36 [logger.py:42] Received request cmpl-cc44b6aca428490a84cc96bd554149c3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:19:36 [async_llm.py:261] Added request cmpl-cc44b6aca428490a84cc96bd554149c3-0.
INFO 03-02 01:19:36 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:19:37 [logger.py:42] Received request cmpl-7234d9c8954e40a085f20db05666199a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:19:37 [async_llm.py:261] Added request cmpl-7234d9c8954e40a085f20db05666199a-0.
INFO 03-02 01:19:38 [logger.py:42] Received request cmpl-ffe4bfcce64b4aa0b677b561551efb35-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:19:38 [async_llm.py:261] Added request cmpl-ffe4bfcce64b4aa0b677b561551efb35-0.
INFO 03-02 01:19:39 [logger.py:42] Received request cmpl-d33572c3daa94d14bc3c4bff62b12a9f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:19:39 [async_llm.py:261] Added request cmpl-d33572c3daa94d14bc3c4bff62b12a9f-0.
INFO 03-02 01:19:40 [logger.py:42] Received request cmpl-95f96449960c44b49bfe333b40d83e8c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:19:40 [async_llm.py:261] Added request cmpl-95f96449960c44b49bfe333b40d83e8c-0.
INFO 03-02 01:19:41 [logger.py:42] Received request cmpl-f4df06abdb484c7b9d66de73b4d8be7e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:19:41 [async_llm.py:261] Added request cmpl-f4df06abdb484c7b9d66de73b4d8be7e-0.
INFO 03-02 01:19:42 [logger.py:42] Received request cmpl-6dfc226db3f74ffa91ddf963751c0fa6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:19:42 [async_llm.py:261] Added request cmpl-6dfc226db3f74ffa91ddf963751c0fa6-0.
INFO 03-02 01:19:43 [logger.py:42] Received request cmpl-4a9dfa89af514d8887aa06a4c505d503-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:19:43 [async_llm.py:261] Added request cmpl-4a9dfa89af514d8887aa06a4c505d503-0.
INFO 03-02 01:19:44 [logger.py:42] Received request cmpl-9a7ab032d99a4e94a415715854a2616b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:19:44 [async_llm.py:261] Added request cmpl-9a7ab032d99a4e94a415715854a2616b-0.
INFO 03-02 01:19:45 [logger.py:42] Received request cmpl-4118ca7027d847839aee6217d717b97a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:19:45 [async_llm.py:261] Added request cmpl-4118ca7027d847839aee6217d717b97a-0.
INFO 03-02 01:19:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:19:46 [logger.py:42] Received request cmpl-c30747301a4048d7bba0f4dc85de8713-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:19:46 [async_llm.py:261] Added request cmpl-c30747301a4048d7bba0f4dc85de8713-0.
INFO 03-02 01:19:48 [logger.py:42] Received request cmpl-234b3e2b607c4b84adabab26028fef9a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:19:48 [async_llm.py:261] Added request cmpl-234b3e2b607c4b84adabab26028fef9a-0.
INFO 03-02 01:19:49 [logger.py:42] Received request cmpl-bab1b8616070477a9949a1c40caf5065-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:19:49 [async_llm.py:261] Added request cmpl-bab1b8616070477a9949a1c40caf5065-0.
INFO 03-02 01:19:50 [logger.py:42] Received request cmpl-d524952cd5e74f76ad292d578bcea8cc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:19:50 [async_llm.py:261] Added request cmpl-d524952cd5e74f76ad292d578bcea8cc-0.
INFO 03-02 01:19:51 [logger.py:42] Received request cmpl-1e1a4c413172489eb5fbbc480bb05ece-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:19:51 [async_llm.py:261] Added request cmpl-1e1a4c413172489eb5fbbc480bb05ece-0.
INFO 03-02 01:19:52 [logger.py:42] Received request cmpl-f534ab145b3f472f8565138c6638af55-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:19:52 [async_llm.py:261] Added request cmpl-f534ab145b3f472f8565138c6638af55-0.
INFO 03-02 01:19:53 [logger.py:42] Received request cmpl-ccde5c20e09040ce93809f751adffa1e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:19:53 [async_llm.py:261] Added request cmpl-ccde5c20e09040ce93809f751adffa1e-0.
INFO 03-02 01:19:54 [logger.py:42] Received request cmpl-b7a92465466941e8b2ea6bd3b63b5d2a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:19:54 [async_llm.py:261] Added request cmpl-b7a92465466941e8b2ea6bd3b63b5d2a-0.
INFO 03-02 01:19:55 [logger.py:42] Received request cmpl-555616d75678447eab911958ff9a0a81-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:19:55 [async_llm.py:261] Added request cmpl-555616d75678447eab911958ff9a0a81-0.
INFO 03-02 01:19:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:19:56 [logger.py:42] Received request cmpl-32a4abd283fe4aa19460ac8407a72d5d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:19:56 [async_llm.py:261] Added request cmpl-32a4abd283fe4aa19460ac8407a72d5d-0.
INFO 03-02 01:19:57 [logger.py:42] Received request cmpl-bf578ba32c8d48c6bea151e60063dcf9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:19:57 [async_llm.py:261] Added request cmpl-bf578ba32c8d48c6bea151e60063dcf9-0.
INFO 03-02 01:19:58 [logger.py:42] Received request cmpl-2eff1dbed7314891bd3f39727067226b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:19:58 [async_llm.py:261] Added request cmpl-2eff1dbed7314891bd3f39727067226b-0.
INFO 03-02 01:19:59 [logger.py:42] Received request cmpl-2b467e23b1ef4b3e81d2c9830a235569-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:19:59 [async_llm.py:261] Added request cmpl-2b467e23b1ef4b3e81d2c9830a235569-0.
INFO 03-02 01:20:01 [logger.py:42] Received request cmpl-da288ac350e247479b2b4ec1264bdc7b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:01 [async_llm.py:261] Added request cmpl-da288ac350e247479b2b4ec1264bdc7b-0.
INFO 03-02 01:20:02 [logger.py:42] Received request cmpl-4091081ac3ce4a0791ab59bb208f7730-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:02 [async_llm.py:261] Added request cmpl-4091081ac3ce4a0791ab59bb208f7730-0.
INFO 03-02 01:20:03 [logger.py:42] Received request cmpl-a0be475c883242779a4c77748397f7c6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:03 [async_llm.py:261] Added request cmpl-a0be475c883242779a4c77748397f7c6-0.
INFO 03-02 01:20:04 [logger.py:42] Received request cmpl-eced3f1c4cec42dfbdcb711e2f3042d5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:04 [async_llm.py:261] Added request cmpl-eced3f1c4cec42dfbdcb711e2f3042d5-0.
INFO 03-02 01:20:05 [logger.py:42] Received request cmpl-beea9e21d06f4d4fab60aea57e2f2fea-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:05 [async_llm.py:261] Added request cmpl-beea9e21d06f4d4fab60aea57e2f2fea-0.
INFO 03-02 01:20:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:20:06 [logger.py:42] Received request cmpl-745cebd6642543fc913246ee47d1e1c9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:06 [async_llm.py:261] Added request cmpl-745cebd6642543fc913246ee47d1e1c9-0.
INFO:  1.2.3.4:123 - "POST /v1/completions HTTP/1.1" 404 Not Found
INFO 03-02 01:20:07 [logger.py:42] Received request cmpl-8f0665e91dfd4443b5a0b93d32b9f098-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:07 [async_llm.py:261] Added request cmpl-8f0665e91dfd4443b5a0b93d32b9f098-0.
INFO 03-02 01:20:08 [logger.py:42] Received request cmpl-bc2fdf06244740b4944bdd4e9176eacc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:08 [async_llm.py:261] Added request cmpl-bc2fdf06244740b4944bdd4e9176eacc-0.
INFO 03-02 01:20:09 [logger.py:42] Received request cmpl-3d4f07838f9045839b4af4956a73ed89-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:09 [async_llm.py:261] Added request cmpl-3d4f07838f9045839b4af4956a73ed89-0.
INFO 03-02 01:20:10 [logger.py:42] Received request cmpl-5197b34b4db34c408e56c0acffa35bb3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:10 [async_llm.py:261] Added request cmpl-5197b34b4db34c408e56c0acffa35bb3-0.
INFO 03-02 01:20:11 [logger.py:42] Received request cmpl-51a8bf1ac1a843da913d99e3be5bdb2f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:11 [async_llm.py:261] Added request cmpl-51a8bf1ac1a843da913d99e3be5bdb2f-0.
INFO 03-02 01:20:13 [logger.py:42] Received request cmpl-2a6e92fe2f474de0bead071b417e283a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:13 [async_llm.py:261] Added request cmpl-2a6e92fe2f474de0bead071b417e283a-0.
INFO 03-02 01:20:14 [logger.py:42] Received request cmpl-a64b7b5d72fc4342bddf48114cd8bfb0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:14 [async_llm.py:261] Added request cmpl-a64b7b5d72fc4342bddf48114cd8bfb0-0.
INFO 03-02 01:20:15 [logger.py:42] Received request cmpl-a8312b0b15f84aee8b5b2ce3a05c550b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:15 [async_llm.py:261] Added request cmpl-a8312b0b15f84aee8b5b2ce3a05c550b-0.
INFO 03-02 01:20:16 [logger.py:42] Received request cmpl-986e52cbe5334829969434820752da3b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:16 [async_llm.py:261] Added request cmpl-986e52cbe5334829969434820752da3b-0.
INFO 03-02 01:20:16 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:20:17 [logger.py:42] Received request cmpl-7c303108d6974a46aff763a3676cda8f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:17 [async_llm.py:261] Added request cmpl-7c303108d6974a46aff763a3676cda8f-0.
INFO 03-02 01:20:18 [logger.py:42] Received request cmpl-5b8c980195d941d2a6c81bfb7c2c35f0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:18 [async_llm.py:261] Added request cmpl-5b8c980195d941d2a6c81bfb7c2c35f0-0.
INFO 03-02 01:20:19 [logger.py:42] Received request cmpl-9d139b742a854e669e362a4b7ae0ac46-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:19 [async_llm.py:261] Added request cmpl-9d139b742a854e669e362a4b7ae0ac46-0.
INFO 03-02 01:20:20 [logger.py:42] Received request cmpl-94994a6ea91b4794822437d2f29bb0a2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:20 [async_llm.py:261] Added request cmpl-94994a6ea91b4794822437d2f29bb0a2-0.
INFO 03-02 01:20:21 [logger.py:42] Received request cmpl-00491d2e4f0b4533a4b7a8cb3d1ddcf8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:21 [async_llm.py:261] Added request cmpl-00491d2e4f0b4533a4b7a8cb3d1ddcf8-0.
INFO 03-02 01:20:22 [logger.py:42] Received request cmpl-30b56df6e39548968a59327f98d6e812-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:22 [async_llm.py:261] Added request cmpl-30b56df6e39548968a59327f98d6e812-0.
INFO 03-02 01:20:23 [logger.py:42] Received request cmpl-0a710d25dd9946708c88b984950c096c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:23 [async_llm.py:261] Added request cmpl-0a710d25dd9946708c88b984950c096c-0.
INFO 03-02 01:20:24 [logger.py:42] Received request cmpl-5365db67f6604d2097732b5ad81aab40-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:24 [async_llm.py:261] Added request cmpl-5365db67f6604d2097732b5ad81aab40-0.
INFO 03-02 01:20:26 [logger.py:42] Received request cmpl-53259edaec964b3d93519c1c79519053-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:26 [async_llm.py:261] Added request cmpl-53259edaec964b3d93519c1c79519053-0.
INFO 03-02 01:20:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:20:27 [logger.py:42] Received request cmpl-6fb787fea7bc467f88c79d0f6bdf0a5b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:27 [async_llm.py:261] Added request cmpl-6fb787fea7bc467f88c79d0f6bdf0a5b-0.
INFO 03-02 01:20:28 [logger.py:42] Received request cmpl-ed3415bc721349f5aed54c2c09f340f1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:28 [async_llm.py:261] Added request cmpl-ed3415bc721349f5aed54c2c09f340f1-0.
INFO 03-02 01:20:29 [logger.py:42] Received request cmpl-0b55cf4d5cb148e78df56fe246524660-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:29 [async_llm.py:261] Added request cmpl-0b55cf4d5cb148e78df56fe246524660-0.
INFO 03-02 01:20:30 [logger.py:42] Received request cmpl-c662e659369a47799414e892bba9c6c4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:30 [async_llm.py:261] Added request cmpl-c662e659369a47799414e892bba9c6c4-0.
INFO 03-02 01:20:31 [logger.py:42] Received request cmpl-083ce0e60b9d4aa1a2b21755ac2eab76-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:31 [async_llm.py:261] Added request cmpl-083ce0e60b9d4aa1a2b21755ac2eab76-0.
INFO 03-02 01:20:32 [logger.py:42] Received request cmpl-aa33b9329f6d4daeaeeeac0b3f877faa-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:32 [async_llm.py:261] Added request cmpl-aa33b9329f6d4daeaeeeac0b3f877faa-0.
INFO 03-02 01:20:33 [logger.py:42] Received request cmpl-6148247666c041fab790f08815bd968d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:33 [async_llm.py:261] Added request cmpl-6148247666c041fab790f08815bd968d-0.
INFO 03-02 01:20:34 [logger.py:42] Received request cmpl-ce085d1020b7453685afeddb0ad4d398-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:34 [async_llm.py:261] Added request cmpl-ce085d1020b7453685afeddb0ad4d398-0.
INFO 03-02 01:20:35 [logger.py:42] Received request cmpl-e26e30071a164b4d841fc91c815315c4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:35 [async_llm.py:261] Added request cmpl-e26e30071a164b4d841fc91c815315c4-0.
INFO 03-02 01:20:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
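The periodic `Engine 000: Avg prompt throughput ...` line above (emitted by `loggers.py`) is the easiest signal in this log to monitor. A minimal parsing sketch — this helper is not part of the platform, just a regex over the line format shown in this log, assuming the field order stays as printed:

```python
import re

# Matches the metrics portion of a vLLM "loggers.py" engine-stats line,
# in the exact field order seen in this log.
STATS_RE = re.compile(
    r"Avg prompt throughput: (?P<prompt_tps>[\d.]+) tokens/s, "
    r"Avg generation throughput: (?P<gen_tps>[\d.]+) tokens/s, "
    r"Running: (?P<running>\d+) reqs, Waiting: (?P<waiting>\d+) reqs, "
    r"GPU KV cache usage: (?P<kv_pct>[\d.]+)%"
)

def parse_engine_stats(line: str):
    """Return the throughput metrics from one stats line, or None if absent."""
    m = STATS_RE.search(line)
    if m is None:
        return None
    d = m.groupdict()
    return {
        "prompt_tps": float(d["prompt_tps"]),
        "gen_tps": float(d["gen_tps"]),
        "running": int(d["running"]),
        "waiting": int(d["waiting"]),
        "kv_cache_pct": float(d["kv_pct"]),
    }

line = ("INFO 03-02 01:20:36 [loggers.py:116] Engine 000: "
        "Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: "
        "4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, "
        "GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%")
print(parse_engine_stats(line))
```

Feeding these dicts into a time-series store would show the steady-state pattern visible here: requests arriving about once per second, each fully drained before the next (Running and Waiting both 0 at every sample).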
INFO 03-02 01:20:36 [logger.py:42] Received request cmpl-173a82fbdb72463eacc328694a7691ec-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:36 [async_llm.py:261] Added request cmpl-173a82fbdb72463eacc328694a7691ec-0.
INFO 03-02 01:20:37 [logger.py:42] Received request cmpl-d8a907d303474c65bf8ab1258c4470a1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:37 [async_llm.py:261] Added request cmpl-d8a907d303474c65bf8ab1258c4470a1-0.
INFO 03-02 01:20:39 [logger.py:42] Received request cmpl-8d8a3a7645ad466b88c4a53f71d6decb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:39 [async_llm.py:261] Added request cmpl-8d8a3a7645ad466b88c4a53f71d6decb-0.
[... 6 further identical 'write a quick sort algorithm.' request / 200 OK / added-request triples (max_tokens=5) elided; only request IDs and timestamps differ ...]
INFO 03-02 01:20:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[... 10 further identical request triples elided ...]
INFO 03-02 01:20:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[... 9 further identical request triples elided ...]
INFO 03-02 01:21:06 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[... 9 further identical request triples elided ...]
INFO 03-02 01:21:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:21:16 [logger.py:42] Received request cmpl-b0bdfee324454a80a46836438a84209e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:21:16 [async_llm.py:261] Added request cmpl-b0bdfee324454a80a46836438a84209e-0.
INFO 03-02 01:21:18 [logger.py:42] Received request cmpl-506b9d4ae004446cb0c276404da48603-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:21:18 [async_llm.py:261] Added request cmpl-506b9d4ae004446cb0c276404da48603-0.
[... 7 identical request/response cycles omitted (01:21:19–01:21:25): same prompt and SamplingParams; only the request ID and timestamp differ ...]
INFO 03-02 01:21:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[... 9 identical request/response cycles omitted (01:21:26–01:21:35) ...]
INFO 03-02 01:21:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[... 10 identical request/response cycles omitted (01:21:36–01:21:46) ...]
INFO 03-02 01:21:46 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[... 9 identical request/response cycles omitted (01:21:47–01:21:56) ...]
INFO 03-02 01:21:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[... 6 identical request/response cycles omitted (01:21:57–01:22:02); the final entry is truncated mid-cycle in the capture ...]
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:02 [async_llm.py:261] Added request cmpl-f687ef1ec5e14bed99bd7925d6415b91-0.
INFO 03-02 01:22:03 [logger.py:42] Received request cmpl-d597453327e046f695d52d7a2b105c41-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:03 [async_llm.py:261] Added request cmpl-d597453327e046f695d52d7a2b105c41-0.
INFO 03-02 01:22:04 [logger.py:42] Received request cmpl-c03699538a1b44848ac31ddfeac77cdc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:04 [async_llm.py:261] Added request cmpl-c03699538a1b44848ac31ddfeac77cdc-0.
INFO 03-02 01:22:05 [logger.py:42] Received request cmpl-5c7b09d7f884490a9ceda9ec0076482f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:05 [async_llm.py:261] Added request cmpl-5c7b09d7f884490a9ceda9ec0076482f-0.
INFO 03-02 01:22:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:22:06 [logger.py:42] Received request cmpl-96c4642c73874cd68d8afd3cef4990ea-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:06 [async_llm.py:261] Added request cmpl-96c4642c73874cd68d8afd3cef4990ea-0.
INFO 03-02 01:22:07 [logger.py:42] Received request cmpl-3692d6a6eab84d69b675f4029d104cd1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:07 [async_llm.py:261] Added request cmpl-3692d6a6eab84d69b675f4029d104cd1-0.
INFO 03-02 01:22:09 [logger.py:42] Received request cmpl-e7eba2d9ef9b457c8f3ef193e150e37f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:09 [async_llm.py:261] Added request cmpl-e7eba2d9ef9b457c8f3ef193e150e37f-0.
INFO 03-02 01:22:10 [logger.py:42] Received request cmpl-3a04945bb60340ea87c71113e2a90689-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:10 [async_llm.py:261] Added request cmpl-3a04945bb60340ea87c71113e2a90689-0.
INFO 03-02 01:22:11 [logger.py:42] Received request cmpl-1a76a6cce2a54e9d867ce82a666c2bcf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:11 [async_llm.py:261] Added request cmpl-1a76a6cce2a54e9d867ce82a666c2bcf-0.
INFO 03-02 01:22:12 [logger.py:42] Received request cmpl-2a0be96196524bb2a49b9bbcb07e7669-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:12 [async_llm.py:261] Added request cmpl-2a0be96196524bb2a49b9bbcb07e7669-0.
INFO 03-02 01:22:13 [logger.py:42] Received request cmpl-5afee0d87c8948a086e01bb5b6cf6354-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:13 [async_llm.py:261] Added request cmpl-5afee0d87c8948a086e01bb5b6cf6354-0.
INFO 03-02 01:22:14 [logger.py:42] Received request cmpl-987e8d910c5e4f969ffacc6ea8846652-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:14 [async_llm.py:261] Added request cmpl-987e8d910c5e4f969ffacc6ea8846652-0.
INFO 03-02 01:22:15 [logger.py:42] Received request cmpl-cba17d4c23c7456e90f864c24e45b882-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:15 [async_llm.py:261] Added request cmpl-cba17d4c23c7456e90f864c24e45b882-0.
INFO 03-02 01:22:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:22:16 [logger.py:42] Received request cmpl-17f1bc3a5131404ca93fa679b7b4773a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:16 [async_llm.py:261] Added request cmpl-17f1bc3a5131404ca93fa679b7b4773a-0.
INFO 03-02 01:22:17 [logger.py:42] Received request cmpl-ba3ebb988c1d4af0951407b9958612b1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:17 [async_llm.py:261] Added request cmpl-ba3ebb988c1d4af0951407b9958612b1-0.
INFO 03-02 01:22:18 [logger.py:42] Received request cmpl-7a28878cd5b54cb49dadc8008e73a08e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:18 [async_llm.py:261] Added request cmpl-7a28878cd5b54cb49dadc8008e73a08e-0.
INFO 03-02 01:22:19 [logger.py:42] Received request cmpl-af8bda1cdfb34214b18f97c387ebf13d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:19 [async_llm.py:261] Added request cmpl-af8bda1cdfb34214b18f97c387ebf13d-0.
INFO 03-02 01:22:20 [logger.py:42] Received request cmpl-df7ce508d0384579b13535f9692b7dfa-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:20 [async_llm.py:261] Added request cmpl-df7ce508d0384579b13535f9692b7dfa-0.
INFO 03-02 01:22:22 [logger.py:42] Received request cmpl-9fd4817fce3a44369f2870ee01f072d2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:22 [async_llm.py:261] Added request cmpl-9fd4817fce3a44369f2870ee01f072d2-0.
INFO 03-02 01:22:23 [logger.py:42] Received request cmpl-6bf2b305a2064f1981eccf2048e5dc5d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:23 [async_llm.py:261] Added request cmpl-6bf2b305a2064f1981eccf2048e5dc5d-0.
INFO 03-02 01:22:24 [logger.py:42] Received request cmpl-92cb3a1d5ad54572bdbe6bb7939c5940-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:24 [async_llm.py:261] Added request cmpl-92cb3a1d5ad54572bdbe6bb7939c5940-0.
INFO 03-02 01:22:25 [logger.py:42] Received request cmpl-2fc902bc64ce4f2e825e8ffe65ff9287-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:25 [async_llm.py:261] Added request cmpl-2fc902bc64ce4f2e825e8ffe65ff9287-0.
INFO 03-02 01:22:26 [logger.py:42] Received request cmpl-1bd75e1b174446ae9db677bb8363029f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:26 [async_llm.py:261] Added request cmpl-1bd75e1b174446ae9db677bb8363029f-0.
INFO 03-02 01:22:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:22:27 [logger.py:42] Received request cmpl-e1f0d30f51244d16995db71b58465fda-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:27 [async_llm.py:261] Added request cmpl-e1f0d30f51244d16995db71b58465fda-0.
INFO 03-02 01:22:28 [logger.py:42] Received request cmpl-43193fe6ceef48ed9b63a8156b7c8139-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:28 [async_llm.py:261] Added request cmpl-43193fe6ceef48ed9b63a8156b7c8139-0.
INFO 03-02 01:22:29 [logger.py:42] Received request cmpl-0e98b287e7b743e8a151740b7bb501f8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:29 [async_llm.py:261] Added request cmpl-0e98b287e7b743e8a151740b7bb501f8-0.
INFO 03-02 01:22:30 [logger.py:42] Received request cmpl-bcde2f5e177745b999558fa4b13da28b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:30 [async_llm.py:261] Added request cmpl-bcde2f5e177745b999558fa4b13da28b-0.
INFO 03-02 01:22:31 [logger.py:42] Received request cmpl-4359dfeec7ea4a539757cbebf06c7084-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:31 [async_llm.py:261] Added request cmpl-4359dfeec7ea4a539757cbebf06c7084-0.
INFO 03-02 01:22:32 [logger.py:42] Received request cmpl-0c845ee784ee48f99a2956ed6d458d40-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:32 [async_llm.py:261] Added request cmpl-0c845ee784ee48f99a2956ed6d458d40-0.
INFO 03-02 01:22:33 [logger.py:42] Received request cmpl-1036c479838743918287192fa586d9c8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:33 [async_llm.py:261] Added request cmpl-1036c479838743918287192fa586d9c8-0.
INFO 03-02 01:22:35 [logger.py:42] Received request cmpl-724ba4056e8f4a87a9d62bffe5a41d96-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:35 [async_llm.py:261] Added request cmpl-724ba4056e8f4a87a9d62bffe5a41d96-0.
INFO 03-02 01:22:36 [logger.py:42] Received request cmpl-7eb78590c919494aa9a402ae34149350-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:36 [async_llm.py:261] Added request cmpl-7eb78590c919494aa9a402ae34149350-0.
INFO 03-02 01:22:36 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:22:37 – 01:23:19 [logger.py:42] (condensed) 40 further requests identical to the one above except for request ID and timestamp — prompt: 'write a quick sort algorithm.', max_tokens=5, temperature=0.0, all other SamplingParams unchanged, prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761] — arriving from 1.2.3.5:1235 at roughly one per second; each was answered "POST /v1/completions HTTP/1.1" 200 OK and added to the engine by [async_llm.py:261].
INFO 03-02 01:22:46 – 01:23:16 [loggers.py:116] (condensed) Engine 000 stats over the same window held steady: Avg prompt throughput 6.3–7.0 tokens/s, Avg generation throughput 4.5–5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%.
INFO 03-02 01:23:20 [logger.py:42] Received request cmpl-ab8c3e45a3b44d17b922c52839462df1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:23:20 [async_llm.py:261] Added request cmpl-ab8c3e45a3b44d17b922c52839462df1-0.
INFO 03-02 01:23:21 [logger.py:42] Received request cmpl-f79e4119c29a4e709304956f90e745dc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:23:21 [async_llm.py:261] Added request cmpl-f79e4119c29a4e709304956f90e745dc-0.
INFO 03-02 01:23:22 [logger.py:42] Received request cmpl-55fd1946c2154fe9a1baadee93f8b17d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:23:22 [async_llm.py:261] Added request cmpl-55fd1946c2154fe9a1baadee93f8b17d-0.
INFO 03-02 01:23:23 [logger.py:42] Received request cmpl-90e94dd58cd04b669745fcde481405b8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:23:23 [async_llm.py:261] Added request cmpl-90e94dd58cd04b669745fcde481405b8-0.
INFO 03-02 01:23:24 [logger.py:42] Received request cmpl-765d517eb85e4c1bb8a4ed1873f1c864-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:23:24 [async_llm.py:261] Added request cmpl-765d517eb85e4c1bb8a4ed1873f1c864-0.
INFO 03-02 01:23:26 [logger.py:42] Received request cmpl-978ec67641b54e0f814002ab9d782d7f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:23:26 [async_llm.py:261] Added request cmpl-978ec67641b54e0f814002ab9d782d7f-0.
INFO 03-02 01:23:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
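The throughput figures are consistent with the traffic pattern: each request carries 7 prompt tokens (the length of the logged `prompt_token_ids`) and generates at most 5 tokens, at roughly one request per second. A back-of-the-envelope check (the one-request-per-second rate is read off the log timestamps and is an assumption; occasional 2-second gaps pull the reported averages down to 6.3/4.5):

```python
# Values taken directly from the log entries above.
prompt_token_ids = [2, 5986, 496, 3823, 4260, 8417, 236761]
max_tokens = 5
requests_per_second = 1.0  # assumption: ~1 request logged per second

# Upper-bound throughput if every second carries exactly one request.
prompt_tps = len(prompt_token_ids) * requests_per_second
gen_tps = max_tokens * requests_per_second
print(prompt_tps, gen_tps)  # 7.0 5.0
```

This matches the first reported window (7.0 prompt tok/s, 5.0 gen tok/s); windows containing a skipped second average out lower, as seen here.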
INFO 03-02 01:23:58 [async_llm.py:261] Added request cmpl-37a0cc51f7214961b35d07efce706cc2-0.
INFO 03-02 01:23:59 [logger.py:42] Received request cmpl-40a5a7d63e9a4e4fa97037fa14ed5ec2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:23:59 [async_llm.py:261] Added request cmpl-40a5a7d63e9a4e4fa97037fa14ed5ec2-0.
INFO 03-02 01:24:00 [logger.py:42] Received request cmpl-b5bf7a69442b47838dfe082f4c5e615d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:24:00 [async_llm.py:261] Added request cmpl-b5bf7a69442b47838dfe082f4c5e615d-0.
INFO 03-02 01:24:01 [logger.py:42] Received request cmpl-99aca6dd3aeb435aaf1c3ab4b9766406-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:24:01 [async_llm.py:261] Added request cmpl-99aca6dd3aeb435aaf1c3ab4b9766406-0.
INFO 03-02 01:24:02 [logger.py:42] Received request cmpl-8c55ddf6358049a0b001ed55636a452f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:24:02 [async_llm.py:261] Added request cmpl-8c55ddf6358049a0b001ed55636a452f-0.
INFO 03-02 01:24:03 [logger.py:42] Received request cmpl-07dfe7e1500449659eeb89f11324781b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:24:03 [async_llm.py:261] Added request cmpl-07dfe7e1500449659eeb89f11324781b-0.
INFO 03-02 01:24:05 [logger.py:42] Received request cmpl-3a7d20c7a8f84984a5723d494ae86266-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:24:05 [async_llm.py:261] Added request cmpl-3a7d20c7a8f84984a5723d494ae86266-0.
INFO 03-02 01:24:06 [logger.py:42] Received request cmpl-5973e950e48b43149353f8d2d4fa05f6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:24:06 [async_llm.py:261] Added request cmpl-5973e950e48b43149353f8d2d4fa05f6-0.
INFO 03-02 01:24:06 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:24:07 [logger.py:42] Received request cmpl-4e63b7cb01ad4f59bda0c3bec840ad8a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:24:07 [async_llm.py:261] Added request cmpl-4e63b7cb01ad4f59bda0c3bec840ad8a-0.
INFO 03-02 01:24:08 [logger.py:42] Received request cmpl-c785875e481d4c7faf5865b8ebb1508a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:24:08 [async_llm.py:261] Added request cmpl-c785875e481d4c7faf5865b8ebb1508a-0.
INFO 03-02 01:24:09 [logger.py:42] Received request cmpl-a5dd8e0c689a4a2796f5840f7ead21d1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:24:09 [async_llm.py:261] Added request cmpl-a5dd8e0c689a4a2796f5840f7ead21d1-0.
INFO 03-02 01:24:10 [logger.py:42] Received request cmpl-3cb4f2bf01a14ce8b1073dd0cacf52b4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:24:10 [async_llm.py:261] Added request cmpl-3cb4f2bf01a14ce8b1073dd0cacf52b4-0.
INFO 03-02 01:24:11 [logger.py:42] Received request cmpl-197ea1c4968a4285bdfaee4e97c02bea-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:24:11 [async_llm.py:261] Added request cmpl-197ea1c4968a4285bdfaee4e97c02bea-0.
INFO 03-02 01:24:12 [logger.py:42] Received request cmpl-33dff908934049a393e4f84b93428851-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:24:12 [async_llm.py:261] Added request cmpl-33dff908934049a393e4f84b93428851-0.
INFO 03-02 01:24:13 [logger.py:42] Received request cmpl-242a9951d056497ca0ea99f5d1d56cfd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:24:13 [async_llm.py:261] Added request cmpl-242a9951d056497ca0ea99f5d1d56cfd-0.
INFO 03-02 01:24:14 [logger.py:42] Received request cmpl-d66ca68d5c5b40c9961f53449fddee52-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:24:14 [async_llm.py:261] Added request cmpl-d66ca68d5c5b40c9961f53449fddee52-0.
INFO 03-02 01:24:15 [logger.py:42] Received request cmpl-73e45de0634b4d7c8a708d7241d046a3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:24:15 [async_llm.py:261] Added request cmpl-73e45de0634b4d7c8a708d7241d046a3-0.
INFO 03-02 01:24:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:24:16 [logger.py:42] Received request cmpl-0aa168419e8445b3896a7ebe757328a8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:24:16 [async_llm.py:261] Added request cmpl-0aa168419e8445b3896a7ebe757328a8-0.
INFO 03-02 01:24:18 [logger.py:42] Received request cmpl-6e1c924543f04377aa3f7147077dd17a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:24:18 [async_llm.py:261] Added request cmpl-6e1c924543f04377aa3f7147077dd17a-0.
INFO 03-02 01:24:19 [logger.py:42] Received request cmpl-80c1ed8427e748f6ad411f6a8cb7947c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:24:19 [async_llm.py:261] Added request cmpl-80c1ed8427e748f6ad411f6a8cb7947c-0.
INFO 03-02 01:24:20 [logger.py:42] Received request cmpl-b2031a61724a4cf79206818090157fa3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:24:20 [async_llm.py:261] Added request cmpl-b2031a61724a4cf79206818090157fa3-0.
INFO 03-02 01:24:21 [logger.py:42] Received request cmpl-04bf29c77f7147e59819c74646065375-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:24:21 [async_llm.py:261] Added request cmpl-04bf29c77f7147e59819c74646065375-0.
INFO 03-02 01:24:22 [logger.py:42] Received request cmpl-ade1ed6eea444ddda2ad03d939a98a78-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:24:22 [async_llm.py:261] Added request cmpl-ade1ed6eea444ddda2ad03d939a98a78-0.
INFO 03-02 01:24:23 [logger.py:42] Received request cmpl-25fe26d4729646e99e20eb067775a5ff-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:24:23 [async_llm.py:261] Added request cmpl-25fe26d4729646e99e20eb067775a5ff-0.
INFO 03-02 01:24:24 [logger.py:42] Received request cmpl-bd58db9877c8440c9464603414e7ffa7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:24:24 [async_llm.py:261] Added request cmpl-bd58db9877c8440c9464603414e7ffa7-0.
INFO 03-02 01:24:25 [logger.py:42] Received request cmpl-5c7a57974f8145bc919a9149e293d376-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:24:25 [async_llm.py:261] Added request cmpl-5c7a57974f8145bc919a9149e293d376-0.
INFO 03-02 01:24:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:24:26 [logger.py:42] Received request cmpl-533ac90c06124b2c868863b151136496-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:24:26 [async_llm.py:261] Added request cmpl-533ac90c06124b2c868863b151136496-0.
INFO 03-02 01:24:27 [logger.py:42] Received request cmpl-6ba67d396f2d4e5bb3621d98b9f9f09f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:24:27 [async_llm.py:261] Added request cmpl-6ba67d396f2d4e5bb3621d98b9f9f09f-0.
INFO 03-02 01:24:28 [logger.py:42] Received request cmpl-49fbff5218f94e6ba1f19659cf6ca5bc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:24:28 [async_llm.py:261] Added request cmpl-49fbff5218f94e6ba1f19659cf6ca5bc-0.
INFO 03-02 01:24:29 [logger.py:42] Received request cmpl-494d5a77881b4cdbbe7b48d5180a15de-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:24:29 [async_llm.py:261] Added request cmpl-494d5a77881b4cdbbe7b48d5180a15de-0.
INFO 03-02 01:24:31 [logger.py:42] Received request cmpl-07263f0b0af3461f82d4d1efc0fe8c3a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:24:31 [async_llm.py:261] Added request cmpl-07263f0b0af3461f82d4d1efc0fe8c3a-0.
INFO 03-02 01:24:32 [logger.py:42] Received request cmpl-2474b8c155184a0ca035dd48c34f4400-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:24:32 [async_llm.py:261] Added request cmpl-2474b8c155184a0ca035dd48c34f4400-0.
INFO 03-02 01:24:33 [logger.py:42] Received request cmpl-85e44ba3fc7b491bbf19c084ef08a30c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:24:33 [async_llm.py:261] Added request cmpl-85e44ba3fc7b491bbf19c084ef08a30c-0.
INFO 03-02 01:24:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:24:46 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:25:16 [async_llm.py:261] Added request cmpl-ab0687b8091241cc86967327fd76c156-0.
INFO 03-02 01:25:17 [logger.py:42] Received request cmpl-dd5a2e6284c0419f842887619de31f98-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:17 [async_llm.py:261] Added request cmpl-dd5a2e6284c0419f842887619de31f98-0.
INFO 03-02 01:25:18 [logger.py:42] Received request cmpl-b4b47aa66c7a4a0ab87016f5f279f808-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:18 [async_llm.py:261] Added request cmpl-b4b47aa66c7a4a0ab87016f5f279f808-0.
INFO 03-02 01:25:19 [logger.py:42] Received request cmpl-1dbb7ea2920f4c368d89e102a2051351-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:19 [async_llm.py:261] Added request cmpl-1dbb7ea2920f4c368d89e102a2051351-0.
INFO 03-02 01:25:20 [logger.py:42] Received request cmpl-5df6d62ada5744ab93023a2c24d46ef3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:20 [async_llm.py:261] Added request cmpl-5df6d62ada5744ab93023a2c24d46ef3-0.
INFO 03-02 01:25:22 [logger.py:42] Received request cmpl-e9edf7483b9043fe9890c637f87697bb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:22 [async_llm.py:261] Added request cmpl-e9edf7483b9043fe9890c637f87697bb-0.
INFO 03-02 01:25:23 [logger.py:42] Received request cmpl-9e7706775c904723b3421bac7d8adbcf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:23 [async_llm.py:261] Added request cmpl-9e7706775c904723b3421bac7d8adbcf-0.
INFO 03-02 01:25:24 [logger.py:42] Received request cmpl-7cf1196b6a94412dad233a11cd6d574e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:24 [async_llm.py:261] Added request cmpl-7cf1196b6a94412dad233a11cd6d574e-0.
INFO 03-02 01:25:25 [logger.py:42] Received request cmpl-5c13b627f228440196daf8d5a1ad540c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:25 [async_llm.py:261] Added request cmpl-5c13b627f228440196daf8d5a1ad540c-0.
INFO 03-02 01:25:26 [logger.py:42] Received request cmpl-13471938cc754337bd5a2c421fdead27-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:26 [async_llm.py:261] Added request cmpl-13471938cc754337bd5a2c421fdead27-0.
INFO 03-02 01:25:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:25:27 [logger.py:42] Received request cmpl-b6192dde9c0e43aa8a64ab561a49226d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:27 [async_llm.py:261] Added request cmpl-b6192dde9c0e43aa8a64ab561a49226d-0.
INFO 03-02 01:25:28 [logger.py:42] Received request cmpl-46de6a7453d640c8b8f86953cf87a304-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:28 [async_llm.py:261] Added request cmpl-46de6a7453d640c8b8f86953cf87a304-0.
INFO 03-02 01:25:29 [logger.py:42] Received request cmpl-a11e5eee9bc34fe5b3460323bfdf920c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:29 [async_llm.py:261] Added request cmpl-a11e5eee9bc34fe5b3460323bfdf920c-0.
INFO 03-02 01:25:30 [logger.py:42] Received request cmpl-370a9dd31b77469db81ca0c1a772bda0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:30 [async_llm.py:261] Added request cmpl-370a9dd31b77469db81ca0c1a772bda0-0.
INFO 03-02 01:25:31 [logger.py:42] Received request cmpl-dfab705f19d2407ea1a2221d709991ea-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:31 [async_llm.py:261] Added request cmpl-dfab705f19d2407ea1a2221d709991ea-0.
INFO 03-02 01:25:32 [logger.py:42] Received request cmpl-0ac189447ff84d0e83415fa2a617e74f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:32 [async_llm.py:261] Added request cmpl-0ac189447ff84d0e83415fa2a617e74f-0.
INFO 03-02 01:25:33 [logger.py:42] Received request cmpl-b34bedc3db4646b9864934353b0ba316-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:33 [async_llm.py:261] Added request cmpl-b34bedc3db4646b9864934353b0ba316-0.
INFO 03-02 01:25:35 [logger.py:42] Received request cmpl-d3fde146d8aa42088c79a18a051904d0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:35 [async_llm.py:261] Added request cmpl-d3fde146d8aa42088c79a18a051904d0-0.
INFO 03-02 01:25:36 [logger.py:42] Received request cmpl-426f9a5754344081a9f94884e52f2b31-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:36 [async_llm.py:261] Added request cmpl-426f9a5754344081a9f94884e52f2b31-0.
INFO 03-02 01:25:36 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:25:37 [logger.py:42] Received request cmpl-e5c8394333c64eb2b1a24701477e310c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:37 [async_llm.py:261] Added request cmpl-e5c8394333c64eb2b1a24701477e310c-0.
INFO 03-02 01:25:38 [logger.py:42] Received request cmpl-ae014146d52d4927a3f8d05d352b7eac-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:38 [async_llm.py:261] Added request cmpl-ae014146d52d4927a3f8d05d352b7eac-0.
INFO 03-02 01:25:39 [logger.py:42] Received request cmpl-0b4c37fd3e384821a7c105aa7c472f41-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:39 [async_llm.py:261] Added request cmpl-0b4c37fd3e384821a7c105aa7c472f41-0.
INFO 03-02 01:25:40 [logger.py:42] Received request cmpl-38e7bad7eb1446608dd931ec449ee5c5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:40 [async_llm.py:261] Added request cmpl-38e7bad7eb1446608dd931ec449ee5c5-0.
INFO 03-02 01:25:41 [logger.py:42] Received request cmpl-11dddbd57c094232b7b2b25ef2c23ca1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:41 [async_llm.py:261] Added request cmpl-11dddbd57c094232b7b2b25ef2c23ca1-0.
INFO 03-02 01:25:42 [logger.py:42] Received request cmpl-949bd2e04b52408eb76df7269696323d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:42 [async_llm.py:261] Added request cmpl-949bd2e04b52408eb76df7269696323d-0.
INFO 03-02 01:25:43 [logger.py:42] Received request cmpl-0f1fce7938d94e47a3ffcf2275f0a63b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:43 [async_llm.py:261] Added request cmpl-0f1fce7938d94e47a3ffcf2275f0a63b-0.
INFO 03-02 01:25:44 [logger.py:42] Received request cmpl-ed45ce53e7d94d5181feca2c3db7998f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:44 [async_llm.py:261] Added request cmpl-ed45ce53e7d94d5181feca2c3db7998f-0.
INFO 03-02 01:25:45 [logger.py:42] Received request cmpl-f815d4e810324cd0885e4ff543f929fd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:45 [async_llm.py:261] Added request cmpl-f815d4e810324cd0885e4ff543f929fd-0.
INFO 03-02 01:25:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:25:46 [logger.py:42] Received request cmpl-ed4685fe9e5d45d58b4e27fb3e279179-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:46 [async_llm.py:261] Added request cmpl-ed4685fe9e5d45d58b4e27fb3e279179-0.
INFO 03-02 01:25:48 [logger.py:42] Received request cmpl-4a537055311f4969ab24abe7734cdd81-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:48 [async_llm.py:261] Added request cmpl-4a537055311f4969ab24abe7734cdd81-0.
INFO 03-02 01:25:49 [logger.py:42] Received request cmpl-0beaf151b39847f2a22cbb3d711c8312-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:49 [async_llm.py:261] Added request cmpl-0beaf151b39847f2a22cbb3d711c8312-0.
INFO 03-02 01:25:50 [logger.py:42] Received request cmpl-6aee8c274b5848e4afdebef6b02cdc6f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:50 [async_llm.py:261] Added request cmpl-6aee8c274b5848e4afdebef6b02cdc6f-0.
INFO 03-02 01:25:51 [logger.py:42] Received request cmpl-0b80230b46964932bb447d6161c0815f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:51 [async_llm.py:261] Added request cmpl-0b80230b46964932bb447d6161c0815f-0.
INFO 03-02 01:25:52 [logger.py:42] Received request cmpl-4a7f220ba3d046ce928ca55c0391bd67-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:52 [async_llm.py:261] Added request cmpl-4a7f220ba3d046ce928ca55c0391bd67-0.
INFO 03-02 01:25:53 [logger.py:42] Received request cmpl-8b0abdaf7d7348a58dc2d1e1a592add1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:53 [async_llm.py:261] Added request cmpl-8b0abdaf7d7348a58dc2d1e1a592add1-0.
INFO 03-02 01:25:54 [logger.py:42] Received request cmpl-f9a96a60b3f648a9b497552ea9805553-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:54 [async_llm.py:261] Added request cmpl-f9a96a60b3f648a9b497552ea9805553-0.
INFO 03-02 01:25:55 [logger.py:42] Received request cmpl-337394e251dd4c3eb5f942a60576a121-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:55 [async_llm.py:261] Added request cmpl-337394e251dd4c3eb5f942a60576a121-0.
INFO 03-02 01:25:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
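The Engine 000 averages are consistent with the request stream: every request carries 7 prompt tokens (the `[2, 5986, ...]` list above) and generates up to 5 tokens (max_tokens=5), arriving at roughly one per second with the occasional skipped second, i.e. about 9 requests per 10-second reporting window. A quick arithmetic check, assuming no request stops before max_tokens:

```python
# Sanity-check the logged Avg prompt/generation throughput figures.
prompt_tokens_per_req = 7   # length of the logged prompt_token_ids list
gen_tokens_per_req = 5      # max_tokens=5; assume no early stop
reqs_per_second = 9 / 10    # ~9 requests per 10 s reporting window

avg_prompt_tput = reqs_per_second * prompt_tokens_per_req  # ~6.3 tokens/s
avg_gen_tput = reqs_per_second * gen_tokens_per_req        # ~4.5 tokens/s
```

Both values match the logged "Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s" line; the windows reporting 7.0/5.0 correspond to a full 10 requests landing in the interval.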
INFO 03-02 01:25:56 [logger.py:42] Received request cmpl-7d491410aef142ed978c3fe9f53b0d37-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:56 [async_llm.py:261] Added request cmpl-7d491410aef142ed978c3fe9f53b0d37-0.
INFO 03-02 01:25:57 [logger.py:42] Received request cmpl-e268e7f6b23947d9900beb06291a09d9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:57 [async_llm.py:261] Added request cmpl-e268e7f6b23947d9900beb06291a09d9-0.
INFO 03-02 01:25:58 [logger.py:42] Received request cmpl-aa5155e8e2834ecfb07990832e69afbd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:58 [async_llm.py:261] Added request cmpl-aa5155e8e2834ecfb07990832e69afbd-0.
INFO 03-02 01:25:59 [logger.py:42] Received request cmpl-2bcbe38f4c8043e6a8377d8079a223b7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:59 [async_llm.py:261] Added request cmpl-2bcbe38f4c8043e6a8377d8079a223b7-0.
INFO 03-02 01:26:01 [logger.py:42] Received request cmpl-a327e5b2dc594b5fb15df94c1d42b574-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:26:01 [async_llm.py:261] Added request cmpl-a327e5b2dc594b5fb15df94c1d42b574-0.
INFO 03-02 01:26:02 [logger.py:42] Received request cmpl-c410820277984a02954017d262b7e7d8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:26:02 [async_llm.py:261] Added request cmpl-c410820277984a02954017d262b7e7d8-0.
INFO 03-02 01:26:03 [logger.py:42] Received request cmpl-c7e3ce390149470a91ac63342b885894-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:26:03 [async_llm.py:261] Added request cmpl-c7e3ce390149470a91ac63342b885894-0.
INFO 03-02 01:26:04 [logger.py:42] Received request cmpl-2e3d84478c66428a8f84924e93597d74-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:26:04 [async_llm.py:261] Added request cmpl-2e3d84478c66428a8f84924e93597d74-0.
INFO 03-02 01:26:05 [logger.py:42] Received request cmpl-a6869c43520b4f5aaf8ef13483a10720-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:26:05 [async_llm.py:261] Added request cmpl-a6869c43520b4f5aaf8ef13483a10720-0.
INFO 03-02 01:26:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:26:06 [logger.py:42] Received request cmpl-469ab9ee98d84970b4ae19e9e0eb8e3d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:26:06 [async_llm.py:261] Added request cmpl-469ab9ee98d84970b4ae19e9e0eb8e3d-0.
INFO 03-02 01:26:07 [logger.py:42] Received request cmpl-9d6fb8bb869d4f6dae6fbdb4a98342d1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:26:07 [async_llm.py:261] Added request cmpl-9d6fb8bb869d4f6dae6fbdb4a98342d1-0.
INFO 03-02 01:26:08 [logger.py:42] Received request cmpl-8f4c0b0c80444513b9ffc859c0547ca6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:26:08 [async_llm.py:261] Added request cmpl-8f4c0b0c80444513b9ffc859c0547ca6-0.
INFO 03-02 01:26:09 [logger.py:42] Received request cmpl-f371dac3a5c14ca2ad80a0971f20e267-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:26:09 [async_llm.py:261] Added request cmpl-f371dac3a5c14ca2ad80a0971f20e267-0.
INFO 03-02 01:26:10 [logger.py:42] Received request cmpl-c6bb1047df394d8aa95ee454d4531c8d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:26:10 [async_llm.py:261] Added request cmpl-c6bb1047df394d8aa95ee454d4531c8d-0.
INFO 03-02 01:26:11 [logger.py:42] Received request cmpl-c456158b3ab848f8be8beb0a0854b0d0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:26:11 [async_llm.py:261] Added request cmpl-c456158b3ab848f8be8beb0a0854b0d0-0.
INFO 03-02 01:26:13 [logger.py:42] Received request cmpl-59e9da9696a943ac867ed0a82ddaea70-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:26:13 [async_llm.py:261] Added request cmpl-59e9da9696a943ac867ed0a82ddaea70-0.
INFO 03-02 01:26:14 [logger.py:42] Received request cmpl-10d04284a25c43a686aa9d709cd258b8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:26:14 [async_llm.py:261] Added request cmpl-10d04284a25c43a686aa9d709cd258b8-0.
INFO 03-02 01:26:15 [logger.py:42] Received request cmpl-8ed3c2d3ec21417e9fb3f5cd7623ee8f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:26:15 [async_llm.py:261] Added request cmpl-8ed3c2d3ec21417e9fb3f5cd7623ee8f-0.
INFO 03-02 01:26:16 [logger.py:42] Received request cmpl-b1b6d159b779491082c864f5948937c4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:26:16 [async_llm.py:261] Added request cmpl-b1b6d159b779491082c864f5948937c4-0.
INFO 03-02 01:26:16 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:26:17 [logger.py:42] Received request cmpl-37367ddd8cf645c2a2579cd04c4c3090-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:26:17 [async_llm.py:261] Added request cmpl-37367ddd8cf645c2a2579cd04c4c3090-0.
INFO 03-02 01:26:18 [logger.py:42] Received request cmpl-3f8344aaba2d4013bf2fab7e84f6e7e4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:26:18 [async_llm.py:261] Added request cmpl-3f8344aaba2d4013bf2fab7e84f6e7e4-0.
INFO 03-02 01:26:19 [logger.py:42] Received request cmpl-0d20b9c347db403e83722e244e0849d5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:26:19 [async_llm.py:261] Added request cmpl-0d20b9c347db403e83722e244e0849d5-0.
INFO 03-02 01:26:20 [logger.py:42] Received request cmpl-c9acd2f617e643f2b785e013692f2671-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:26:20 [async_llm.py:261] Added request cmpl-c9acd2f617e643f2b785e013692f2671-0.
INFO 03-02 01:26:21 [logger.py:42] Received request cmpl-d17454fabd984bd6a1f28005c66c2b6a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:26:21 [async_llm.py:261] Added request cmpl-d17454fabd984bd6a1f28005c66c2b6a-0.
INFO 03-02 01:26:22 [logger.py:42] Received request cmpl-f7a0ad6ed11a4ba5a8b733e8a4d62299-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:26:22 [async_llm.py:261] Added request cmpl-f7a0ad6ed11a4ba5a8b733e8a4d62299-0.
INFO 03-02 01:26:23 [logger.py:42] Received request cmpl-7ae084d962cc45fd9f77910073de3382-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:26:23 [async_llm.py:261] Added request cmpl-7ae084d962cc45fd9f77910073de3382-0.
INFO 03-02 01:26:24 [logger.py:42] Received request cmpl-c605c680ca144d3eac93ff900d030487-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:26:24 [async_llm.py:261] Added request cmpl-c605c680ca144d3eac93ff900d030487-0.
INFO 03-02 01:26:26 [logger.py:42] Received request cmpl-4f61f5ee327c49dea3afadd2f0551650-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:26:26 [async_llm.py:261] Added request cmpl-4f61f5ee327c49dea3afadd2f0551650-0.
INFO 03-02 01:26:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:26:27 [logger.py:42] Received request cmpl-302b145c5fda4df1b2f1b738073f8ad4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:26:27 [async_llm.py:261] Added request cmpl-302b145c5fda4df1b2f1b738073f8ad4-0.
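For monitoring or correlating these entries, the "Received request" lines have a regular shape that is easy to parse. A minimal sketch, using the request line above as the sample:

```python
import re

# One "Received request" line from the log above (truncated after the prompt).
line = ("INFO 03-02 01:26:27 [logger.py:42] Received request "
        "cmpl-302b145c5fda4df1b2f1b738073f8ad4-0: "
        "prompt: 'write a quick sort algorithm.'")

# Extract the timestamp and the request id from a logger.py line.
pattern = re.compile(
    r"INFO (\d{2}-\d{2} \d{2}:\d{2}:\d{2}) \[logger\.py:\d+\] "
    r"Received request (cmpl-[0-9a-f]+-\d+):"
)
m = pattern.search(line)
timestamp, request_id = m.group(1), m.group(2)
```

The same `cmpl-...-0` id reappears in the matching `async_llm.py` "Added request" line, so the extracted id is enough to pair the two events per request.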
INFO 03-02 01:26:28 [logger.py:42] Received request cmpl-1b4de3e7df9c4b1c8464d80ac6fe347f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:26:28 [async_llm.py:261] Added request cmpl-1b4de3e7df9c4b1c8464d80ac6fe347f-0.
INFO 03-02 01:26:29 [logger.py:42] Received request cmpl-e211dcf5234d48dba106e3d777ca621e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:26:29 [async_llm.py:261] Added request cmpl-e211dcf5234d48dba106e3d777ca621e-0.
INFO 03-02 01:26:30 [logger.py:42] Received request cmpl-827aaff3afdd4064b6ac87974049eee9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:26:30 [async_llm.py:261] Added request cmpl-827aaff3afdd4064b6ac87974049eee9-0.
INFO 03-02 01:26:31 [logger.py:42] Received request cmpl-bdae1163f5f14e41bfde1e3cd9fb8a2f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:26:31 [async_llm.py:261] Added request cmpl-bdae1163f5f14e41bfde1e3cd9fb8a2f-0.
INFO 03-02 01:26:32 [logger.py:42] Received request cmpl-31b6f09256534d1298a3db2df43913f1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:26:32 [async_llm.py:261] Added request cmpl-31b6f09256534d1298a3db2df43913f1-0.
INFO 03-02 01:26:33 [logger.py:42] Received request cmpl-139ea0c0c3fe4d0c8754689901bdfe9f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:26:33 [async_llm.py:261] Added request cmpl-139ea0c0c3fe4d0c8754689901bdfe9f-0.
INFO 03-02 01:26:34 [logger.py:42] Received request cmpl-f329613e4e524c31bea6d9dcaf4da96c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:26:34 [async_llm.py:261] Added request cmpl-f329613e4e524c31bea6d9dcaf4da96c-0.
INFO 03-02 01:26:35 [logger.py:42] Received request cmpl-a44f92f5a6544553b096a32c66340787-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:26:35 [async_llm.py:261] Added request cmpl-a44f92f5a6544553b096a32c66340787-0.
INFO 03-02 01:26:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:26:36 [logger.py:42] Received request cmpl-df0a5dd6dba1423482b9a2eee3704e33-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:26:36 [async_llm.py:261] Added request cmpl-df0a5dd6dba1423482b9a2eee3704e33-0.
INFO 03-02 01:26:37 [logger.py:42] Received request cmpl-2be29c3decf7476bb3cab2b4e2b9bd15-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:26:37 [async_llm.py:261] Added request cmpl-2be29c3decf7476bb3cab2b4e2b9bd15-0.
INFO 03-02 01:26:39 [logger.py:42] Received request cmpl-65b153d5acca4a6981c70e19076b5abe-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:26:39 [async_llm.py:261] Added request cmpl-65b153d5acca4a6981c70e19076b5abe-0.
INFO 03-02 01:26:40 [logger.py:42] Received request cmpl-45802863fa4c47e093e4276eb3708d13-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:26:40 [async_llm.py:261] Added request cmpl-45802863fa4c47e093e4276eb3708d13-0.
INFO 03-02 01:26:41 [logger.py:42] Received request cmpl-7ac39cb849d545b1bb9b7b0c34aefc52-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:26:41 [async_llm.py:261] Added request cmpl-7ac39cb849d545b1bb9b7b0c34aefc52-0.
INFO 03-02 01:26:42 [logger.py:42] Received request cmpl-210f7b0b90c64ac49ca1a1d5004636c2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:26:42 [async_llm.py:261] Added request cmpl-210f7b0b90c64ac49ca1a1d5004636c2-0.
INFO 03-02 01:26:43 [logger.py:42] Received request cmpl-d346fcfe4760490cbfe6035a68dea82f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:26:43 [async_llm.py:261] Added request cmpl-d346fcfe4760490cbfe6035a68dea82f-0.
INFO 03-02 01:26:44 [logger.py:42] Received request cmpl-38ca46bf69984c13bd1184fd4afb37ef-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:26:44 [async_llm.py:261] Added request cmpl-38ca46bf69984c13bd1184fd4afb37ef-0.
INFO 03-02 01:26:45 [logger.py:42] Received request cmpl-ee5c6ca8938347bfb40212d1f8852b8b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:26:45 [async_llm.py:261] Added request cmpl-ee5c6ca8938347bfb40212d1f8852b8b-0.
INFO 03-02 01:26:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:26:46 [logger.py:42] Received request cmpl-d366447feb7d4627bea004da5cd022e8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:26:46 [async_llm.py:261] Added request cmpl-d366447feb7d4627bea004da5cd022e8-0.
INFO 03-02 01:26:47 [logger.py:42] Received request cmpl-65d576b0aca84346b77a5e1330b6be35-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:26:47 [async_llm.py:261] Added request cmpl-65d576b0aca84346b77a5e1330b6be35-0.
INFO 03-02 01:26:48 [logger.py:42] Received request cmpl-f7103512c26948b087a0c31d87ceccab-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:26:48 [async_llm.py:261] Added request cmpl-f7103512c26948b087a0c31d87ceccab-0.
INFO 03-02 01:26:49 [logger.py:42] Received request cmpl-5c51627027a34d43b765246bdd079650-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:26:49 [async_llm.py:261] Added request cmpl-5c51627027a34d43b765246bdd079650-0.
INFO 03-02 01:26:50 [logger.py:42] Received request cmpl-4dba030157e646d584eb5182d8e9a063-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:26:50 [async_llm.py:261] Added request cmpl-4dba030157e646d584eb5182d8e9a063-0.
INFO 03-02 01:26:52 [logger.py:42] Received request cmpl-c6a97980871a4e2c914ff1bd5caeb25c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:26:52 [async_llm.py:261] Added request cmpl-c6a97980871a4e2c914ff1bd5caeb25c-0.
INFO 03-02 01:26:53 [logger.py:42] Received request cmpl-ad21b81ff0ae439a88b2aa4dde508c1a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:26:53 [async_llm.py:261] Added request cmpl-ad21b81ff0ae439a88b2aa4dde508c1a-0.
INFO 03-02 01:26:54 [logger.py:42] Received request cmpl-7f41a1bceff648baa02110a2c40d975e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:26:54 [async_llm.py:261] Added request cmpl-7f41a1bceff648baa02110a2c40d975e-0.
INFO 03-02 01:26:55 [logger.py:42] Received request cmpl-21ecf60e35b24fc28ae44e4f2182f5a0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:26:55 [async_llm.py:261] Added request cmpl-21ecf60e35b24fc28ae44e4f2182f5a0-0.
INFO 03-02 01:26:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:26:56 [logger.py:42] Received request cmpl-1feeae348d5d4815a3fb570c3c7a76aa-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:26:56 [async_llm.py:261] Added request cmpl-1feeae348d5d4815a3fb570c3c7a76aa-0.
INFO 03-02 01:26:57 [logger.py:42] Received request cmpl-2d3a22d739b1433ab9e7a967dcbd4b9b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:26:57 [async_llm.py:261] Added request cmpl-2d3a22d739b1433ab9e7a967dcbd4b9b-0.
INFO 03-02 01:26:58 [logger.py:42] Received request cmpl-f5dc8f31caba482d9abb2923f47554f9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:26:58 [async_llm.py:261] Added request cmpl-f5dc8f31caba482d9abb2923f47554f9-0.
INFO 03-02 01:26:59 [logger.py:42] Received request cmpl-c4ac445da5b84046bc4bea78d634992a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:26:59 [async_llm.py:261] Added request cmpl-c4ac445da5b84046bc4bea78d634992a-0.
INFO 03-02 01:27:00 [logger.py:42] Received request cmpl-40996052ebd34467997cd08b8734eb04-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:27:00 [async_llm.py:261] Added request cmpl-40996052ebd34467997cd08b8734eb04-0.
INFO 03-02 01:27:01 [logger.py:42] Received request cmpl-49811c45c35443728e63435c0215f6ba-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:27:01 [async_llm.py:261] Added request cmpl-49811c45c35443728e63435c0215f6ba-0.
INFO 03-02 01:27:02 [logger.py:42] Received request cmpl-7794f62849ef4d5c8aae68393cbfcce8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:27:02 [async_llm.py:261] Added request cmpl-7794f62849ef4d5c8aae68393cbfcce8-0.
INFO 03-02 01:27:04 [logger.py:42] Received request cmpl-64a2e4c3098640368e76c859f5471a94-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:27:04 [async_llm.py:261] Added request cmpl-64a2e4c3098640368e76c859f5471a94-0.
INFO 03-02 01:27:05 [logger.py:42] Received request cmpl-482f59f4818c44bda7e714298a3fbd25-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:27:05 [async_llm.py:261] Added request cmpl-482f59f4818c44bda7e714298a3fbd25-0.
INFO 03-02 01:27:06 [logger.py:42] Received request cmpl-f5922d465e4d4bd8b47abeb16321be1e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:27:06 [async_llm.py:261] Added request cmpl-f5922d465e4d4bd8b47abeb16321be1e-0.
INFO 03-02 01:27:06 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:27:07 [logger.py:42] Received request cmpl-c1ebeb867c974319b2fd7b820c2c82f4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:27:07 [async_llm.py:261] Added request cmpl-c1ebeb867c974319b2fd7b820c2c82f4-0.
INFO 03-02 01:27:08 [logger.py:42] Received request cmpl-ddc0e012abd540d39388d43bbdf516f0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:27:08 [async_llm.py:261] Added request cmpl-ddc0e012abd540d39388d43bbdf516f0-0.
INFO 03-02 01:27:09 [logger.py:42] Received request cmpl-0635b3a3baa0440e84c393775a55618b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:27:09 [async_llm.py:261] Added request cmpl-0635b3a3baa0440e84c393775a55618b-0.
[ ... 6 further /v1/completions requests (01:27:10–01:27:15) elided: same prompt and SamplingParams, each answered 200 OK and added to the engine ... ]
INFO 03-02 01:27:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[ ... 9 further /v1/completions requests (01:27:17–01:27:25) elided: same prompt and SamplingParams, each answered 200 OK and added to the engine ... ]
INFO 03-02 01:27:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[ ... 9 further /v1/completions requests (01:27:26–01:27:35) elided: same prompt and SamplingParams, each answered 200 OK and added to the engine ... ]
INFO 03-02 01:27:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[ ... 10 further /v1/completions requests (01:27:36–01:27:46) elided: same prompt and SamplingParams, each answered 200 OK and added to the engine ... ]
INFO 03-02 01:27:46 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.9 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 1.3%, Prefix cache hit rate: 0.0%
[ ... 6 further /v1/completions requests (01:27:47–01:27:52) elided: same prompt and SamplingParams, each answered 200 OK and added to the engine ... ]
INFO 03-02 01:27:53 [logger.py:42] Received request cmpl-5d8b23853c384562ae251bd4fbe1056a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:27:53 [async_llm.py:261] Added request cmpl-5d8b23853c384562ae251bd4fbe1056a-0.
INFO 03-02 01:27:54 [logger.py:42] Received request cmpl-c0832fb027374201892391e158f3eca9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:27:54 [async_llm.py:261] Added request cmpl-c0832fb027374201892391e158f3eca9-0.
INFO 03-02 01:27:56 [logger.py:42] Received request cmpl-db4d37b82c0d4040925992a7bc4befd6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:27:56 [async_llm.py:261] Added request cmpl-db4d37b82c0d4040925992a7bc4befd6-0.
INFO 03-02 01:27:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.6 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
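The periodic `loggers.py:116` lines like the one above are the engine's health summary: throughput, queue depth, and KV-cache pressure. A small illustrative sketch of pulling those metrics out of such a line with a regular expression (the line format is copied verbatim from this log; the parser itself is an assumption, not part of vLLM):

```python
import re

# Pattern matching the metric fields of a vLLM engine-stats log line.
STAT_RE = re.compile(
    r"Avg prompt throughput: (?P<prompt_tps>[\d.]+) tokens/s, "
    r"Avg generation throughput: (?P<gen_tps>[\d.]+) tokens/s, "
    r"Running: (?P<running>\d+) reqs, Waiting: (?P<waiting>\d+) reqs, "
    r"GPU KV cache usage: (?P<kv_pct>[\d.]+)%"
)

# Sample line taken verbatim from the log above.
line = ("INFO 03-02 01:27:56 [loggers.py:116] Engine 000: "
        "Avg prompt throughput: 6.3 tokens/s, "
        "Avg generation throughput: 4.6 tokens/s, "
        "Running: 0 reqs, Waiting: 0 reqs, "
        "GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%")

m = STAT_RE.search(line)
stats = {k: float(v) for k, v in m.groupdict().items()}
print(stats)
```

With no running or waiting requests and sub-1% KV-cache usage, the engine is effectively idle between these short probe requests.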
... (repeated /v1/completions request cycles with identical parameters and new request IDs omitted) ...
INFO 03-02 01:28:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
... (repeated /v1/completions request cycles with identical parameters and new request IDs omitted) ...
INFO 03-02 01:28:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
... (repeated /v1/completions request cycles with identical parameters and new request IDs omitted) ...
INFO 03-02 01:28:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:28:26 [logger.py:42] Received request cmpl-bb2f0e20ef5d492a8b2ec51c06578f6c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:28:26 [async_llm.py:261] Added request cmpl-bb2f0e20ef5d492a8b2ec51c06578f6c-0.
... (repeated /v1/completions request cycles with identical parameters and new request IDs omitted) ...
INFO 03-02 01:28:32 [logger.py:42] Received request cmpl-a4ca55428e994c66b3f035e56c52d5a9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:28:32 [async_llm.py:261] Added request cmpl-a4ca55428e994c66b3f035e56c52d5a9-0.
INFO 03-02 01:28:34 [logger.py:42] Received request cmpl-2be07ced01894effbd880b14ca114591-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:28:34 [async_llm.py:261] Added request cmpl-2be07ced01894effbd880b14ca114591-0.
INFO 03-02 01:28:35 [logger.py:42] Received request cmpl-d524710ea7944ef6ac1cae3742e31162-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:28:35 [async_llm.py:261] Added request cmpl-d524710ea7944ef6ac1cae3742e31162-0.
INFO 03-02 01:28:36 [logger.py:42] Received request cmpl-be87574078554461875ab1273cd872e7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:28:36 [async_llm.py:261] Added request cmpl-be87574078554461875ab1273cd872e7-0.
INFO 03-02 01:28:36 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:28:37 [logger.py:42] Received request cmpl-df29811b98e649ac911fde3a165ca717-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:28:37 [async_llm.py:261] Added request cmpl-df29811b98e649ac911fde3a165ca717-0.
INFO 03-02 01:28:38 [logger.py:42] Received request cmpl-d2ca59d04dfe44dc82db817f441f4a1f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:28:38 [async_llm.py:261] Added request cmpl-d2ca59d04dfe44dc82db817f441f4a1f-0.
INFO 03-02 01:28:39 [logger.py:42] Received request cmpl-9ad62cb279f64bdba235ada72232605b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:28:39 [async_llm.py:261] Added request cmpl-9ad62cb279f64bdba235ada72232605b-0.
INFO 03-02 01:28:40 [logger.py:42] Received request cmpl-eb207060ae604f1f8f0c2171456b8e25-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:28:40 [async_llm.py:261] Added request cmpl-eb207060ae604f1f8f0c2171456b8e25-0.
INFO 03-02 01:28:41 [logger.py:42] Received request cmpl-a2d06822aa7c402b91dbd09ff23be70c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:28:41 [async_llm.py:261] Added request cmpl-a2d06822aa7c402b91dbd09ff23be70c-0.
INFO 03-02 01:28:42 [logger.py:42] Received request cmpl-70f228c2b7074ece9f565513a1578520-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:28:42 [async_llm.py:261] Added request cmpl-70f228c2b7074ece9f565513a1578520-0.
INFO 03-02 01:28:43 [logger.py:42] Received request cmpl-e51e3d32ad3a4082992bd77b38d782d6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:28:43 [async_llm.py:261] Added request cmpl-e51e3d32ad3a4082992bd77b38d782d6-0.
INFO 03-02 01:28:44 [logger.py:42] Received request cmpl-9ad0c15b10c34c89b1850c663ffb0af9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:28:44 [async_llm.py:261] Added request cmpl-9ad0c15b10c34c89b1850c663ffb0af9-0.
INFO 03-02 01:28:45 [logger.py:42] Received request cmpl-6674c459d73d45e7aa576a6b3a0b52a5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:28:45 [async_llm.py:261] Added request cmpl-6674c459d73d45e7aa576a6b3a0b52a5-0.
INFO 03-02 01:28:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:28:47 [logger.py:42] Received request cmpl-4b5983e0b6084321b9859ceb1f6b0586-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:28:47 [async_llm.py:261] Added request cmpl-4b5983e0b6084321b9859ceb1f6b0586-0.
INFO 03-02 01:28:48 [logger.py:42] Received request cmpl-4609bc536a254f5bbfebe6195eb6627e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:28:48 [async_llm.py:261] Added request cmpl-4609bc536a254f5bbfebe6195eb6627e-0.
INFO 03-02 01:28:49 [logger.py:42] Received request cmpl-41c8bb6899844973b0c4a4d08436c3d6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:28:49 [async_llm.py:261] Added request cmpl-41c8bb6899844973b0c4a4d08436c3d6-0.
INFO 03-02 01:28:50 [logger.py:42] Received request cmpl-d499485b355c439f830f5bedfb6f1d74-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:28:50 [async_llm.py:261] Added request cmpl-d499485b355c439f830f5bedfb6f1d74-0.
INFO 03-02 01:28:51 [logger.py:42] Received request cmpl-dd87cf2910cc464b951fbfa016fd65e6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:28:51 [async_llm.py:261] Added request cmpl-dd87cf2910cc464b951fbfa016fd65e6-0.
INFO 03-02 01:28:52 [logger.py:42] Received request cmpl-e133600793474abe943045f36c37738b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:28:52 [async_llm.py:261] Added request cmpl-e133600793474abe943045f36c37738b-0.
INFO 03-02 01:28:53 [logger.py:42] Received request cmpl-88c77e5f293a4966a4487d99c6af1ebe-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:28:53 [async_llm.py:261] Added request cmpl-88c77e5f293a4966a4487d99c6af1ebe-0.
INFO 03-02 01:28:54 [logger.py:42] Received request cmpl-2a791e053587420eb73fd88d0fb1dfe9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:28:54 [async_llm.py:261] Added request cmpl-2a791e053587420eb73fd88d0fb1dfe9-0.
INFO 03-02 01:28:55 [logger.py:42] Received request cmpl-1c4179e32547434798416797f5f62f45-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:28:55 [async_llm.py:261] Added request cmpl-1c4179e32547434798416797f5f62f45-0.
INFO 03-02 01:28:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:28:56 [logger.py:42] Received request cmpl-acb84b8e7fdc44e0b90d1e66efef8328-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:28:56 [async_llm.py:261] Added request cmpl-acb84b8e7fdc44e0b90d1e66efef8328-0.
INFO 03-02 01:28:57 [logger.py:42] Received request cmpl-c8eb0adfe3f548fbaf130ad135232168-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:28:57 [async_llm.py:261] Added request cmpl-c8eb0adfe3f548fbaf130ad135232168-0.
INFO 03-02 01:28:58 [logger.py:42] Received request cmpl-e527324e6f3f4df6b6321f56d5fae59b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:28:58 [async_llm.py:261] Added request cmpl-e527324e6f3f4df6b6321f56d5fae59b-0.
INFO 03-02 01:29:00 [logger.py:42] Received request cmpl-b9e1d524ff60408ebc4a25f6c146a914-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:29:00 [async_llm.py:261] Added request cmpl-b9e1d524ff60408ebc4a25f6c146a914-0.
INFO 03-02 01:29:01 [logger.py:42] Received request cmpl-65c9665b5e1b4f5691999c33658d94a4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:29:01 [async_llm.py:261] Added request cmpl-65c9665b5e1b4f5691999c33658d94a4-0.
INFO 03-02 01:29:02 [logger.py:42] Received request cmpl-ecdcfb33b0f84cc5a7ac9db793a1c6e2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:29:02 [async_llm.py:261] Added request cmpl-ecdcfb33b0f84cc5a7ac9db793a1c6e2-0.
INFO 03-02 01:29:03 [logger.py:42] Received request cmpl-0c03a60e784c4000bad8883d7b9addee-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:29:03 [async_llm.py:261] Added request cmpl-0c03a60e784c4000bad8883d7b9addee-0.
INFO 03-02 01:29:04 [logger.py:42] Received request cmpl-5aef15d8fdb1463daf7640b1d2e25ce2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:29:04 [async_llm.py:261] Added request cmpl-5aef15d8fdb1463daf7640b1d2e25ce2-0.
INFO 03-02 01:29:05 [logger.py:42] Received request cmpl-5d5d5a25e2634eb6bb7074050d0d54ed-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:29:05 [async_llm.py:261] Added request cmpl-5d5d5a25e2634eb6bb7074050d0d54ed-0.
INFO 03-02 01:29:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:29:06 [logger.py:42] Received request cmpl-cab689f43674447a9c6891f225502324-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:29:06 [async_llm.py:261] Added request cmpl-cab689f43674447a9c6891f225502324-0.
[... 9 identical request/response/queue triplets elided (01:29:07 – 01:29:16): same prompt and SamplingParams as above, only the cmpl-* request ID changes ...]
INFO 03-02 01:29:16 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.8 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 1.3%, Prefix cache hit rate: 0.0%
[... 9 identical request/response/queue triplets elided (01:29:17 – 01:29:26): same prompt and SamplingParams, only the cmpl-* request ID changes ...]
INFO 03-02 01:29:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.7 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[... 9 identical request/response/queue triplets elided (01:29:27 – 01:29:35): same prompt and SamplingParams, only the cmpl-* request ID changes ...]
INFO 03-02 01:29:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[... 9 identical request/response/queue triplets elided (01:29:36 – 01:29:45): same prompt and SamplingParams, only the cmpl-* request ID changes ...]
INFO 03-02 01:29:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
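The periodic `Engine 000` stats line above can be turned into structured metrics for dashboards or alerting. A minimal parsing sketch: the line format is copied verbatim from this log, while the field names on the returned dict (`prompt_tps`, `gen_tps`, etc.) are my own, not vLLM's.

```python
import re

# Example stats line copied from the log above.
LINE = ("INFO 03-02 01:29:46 [loggers.py:116] Engine 000: "
        "Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, "
        "Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, "
        "Prefix cache hit rate: 0.0%")

# Named groups for each metric the stats line reports.
PATTERN = re.compile(
    r"Avg prompt throughput: (?P<prompt_tps>[\d.]+) tokens/s, "
    r"Avg generation throughput: (?P<gen_tps>[\d.]+) tokens/s, "
    r"Running: (?P<running>\d+) reqs, Waiting: (?P<waiting>\d+) reqs, "
    r"GPU KV cache usage: (?P<kv_pct>[\d.]+)%, "
    r"Prefix cache hit rate: (?P<prefix_hit_pct>[\d.]+)%"
)

def parse_stats(line: str) -> dict:
    """Extract throughput and cache metrics from one Engine stats line."""
    m = PATTERN.search(line)
    if m is None:
        raise ValueError("not an Engine stats line")
    # Convert every captured field to float for easy aggregation.
    return {k: float(v) for k, v in m.groupdict().items()}

print(parse_stats(LINE))
```

Against the stats lines in this log, the parsed generation throughput stays in the 4.5–4.8 tokens/s band while the KV cache sits under 1.5%, consistent with the single short max_tokens=5 request in flight at a time.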
[... 4 identical request/response/queue triplets elided (01:29:46 – 01:29:49): same prompt and SamplingParams, only the cmpl-* request ID changes ...]
INFO 03-02 01:29:51 [logger.py:42] Received request cmpl-4aceb0e7786c4793803e12dedfffd4b6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:29:51 [async_llm.py:261] Added request cmpl-4aceb0e7786c4793803e12dedfffd4b6-0.
INFO 03-02 01:29:52 [logger.py:42] Received request cmpl-95bc0e72c750464aafac6e3b62cc8b44-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:29:52 [async_llm.py:261] Added request cmpl-95bc0e72c750464aafac6e3b62cc8b44-0.
INFO 03-02 01:29:53 [logger.py:42] Received request cmpl-c811c4f9892d4b52a0dbb48c2198a17e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:29:53 [async_llm.py:261] Added request cmpl-c811c4f9892d4b52a0dbb48c2198a17e-0.
INFO 03-02 01:29:54 [logger.py:42] Received request cmpl-65b11e6211c74edc8bb190e62bf23d0a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:29:54 [async_llm.py:261] Added request cmpl-65b11e6211c74edc8bb190e62bf23d0a-0.
INFO 03-02 01:29:55 [logger.py:42] Received request cmpl-09ecd0cdbf784431b2ebe67dfe305ca8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:29:55 [async_llm.py:261] Added request cmpl-09ecd0cdbf784431b2ebe67dfe305ca8-0.
INFO 03-02 01:29:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:29:56 [logger.py:42] Received request cmpl-89e93393147c4705b5c3cd967095241e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:29:56 [async_llm.py:261] Added request cmpl-89e93393147c4705b5c3cd967095241e-0.
INFO 03-02 01:29:57 [logger.py:42] Received request cmpl-f946ec722cbf4f04a7c2c4811909f9ef-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:29:57 [async_llm.py:261] Added request cmpl-f946ec722cbf4f04a7c2c4811909f9ef-0.
INFO 03-02 01:29:58 [logger.py:42] Received request cmpl-65cdfdb1291d49968dbfb694dddb0794-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:29:58 [async_llm.py:261] Added request cmpl-65cdfdb1291d49968dbfb694dddb0794-0.
INFO 03-02 01:29:59 [logger.py:42] Received request cmpl-ad06bf6bc8f54d44b16c91522327040f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:29:59 [async_llm.py:261] Added request cmpl-ad06bf6bc8f54d44b16c91522327040f-0.
INFO 03-02 01:30:00 [logger.py:42] Received request cmpl-ca4fb65b438340e99f4a233b958d45b1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:00 [async_llm.py:261] Added request cmpl-ca4fb65b438340e99f4a233b958d45b1-0.
INFO 03-02 01:30:01 [logger.py:42] Received request cmpl-3c30d17fa3e4483ebce60154dfd7b0e0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:01 [async_llm.py:261] Added request cmpl-3c30d17fa3e4483ebce60154dfd7b0e0-0.
INFO 03-02 01:30:02 [logger.py:42] Received request cmpl-5e5a7a64ffce4d158088fd18f1c98713-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:02 [async_llm.py:261] Added request cmpl-5e5a7a64ffce4d158088fd18f1c98713-0.
INFO 03-02 01:30:04 [logger.py:42] Received request cmpl-580e62636c874fa883d5ccf2c73107dc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:04 [async_llm.py:261] Added request cmpl-580e62636c874fa883d5ccf2c73107dc-0.
INFO 03-02 01:30:05 [logger.py:42] Received request cmpl-64c1e1fdb5fe4061a1c65ba94d6816e5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:05 [async_llm.py:261] Added request cmpl-64c1e1fdb5fe4061a1c65ba94d6816e5-0.
INFO 03-02 01:30:06 [logger.py:42] Received request cmpl-6ab1851027314a8e998b56c22617fc63-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:06 [async_llm.py:261] Added request cmpl-6ab1851027314a8e998b56c22617fc63-0.
INFO 03-02 01:30:06 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:30:07 [logger.py:42] Received request cmpl-7a644ebdcb324ad59d39f8ff0c0ff80c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:07 [async_llm.py:261] Added request cmpl-7a644ebdcb324ad59d39f8ff0c0ff80c-0.
INFO 03-02 01:30:08 [logger.py:42] Received request cmpl-48fddf79579248529fffe9b99f14fa32-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:08 [async_llm.py:261] Added request cmpl-48fddf79579248529fffe9b99f14fa32-0.
INFO 03-02 01:30:09 [logger.py:42] Received request cmpl-1bd97d6af0c143f0a793fc6e55cceaf7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:09 [async_llm.py:261] Added request cmpl-1bd97d6af0c143f0a793fc6e55cceaf7-0.
INFO 03-02 01:30:10 [logger.py:42] Received request cmpl-c46f294c999d412a940f15674be32f44-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:10 [async_llm.py:261] Added request cmpl-c46f294c999d412a940f15674be32f44-0.
INFO 03-02 01:30:11 [logger.py:42] Received request cmpl-d227c282e5f849c38f077660bd3930ed-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:11 [async_llm.py:261] Added request cmpl-d227c282e5f849c38f077660bd3930ed-0.
INFO 03-02 01:30:12 [logger.py:42] Received request cmpl-f3e0cc61252243ce94f90ca94b0d2ef1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:12 [async_llm.py:261] Added request cmpl-f3e0cc61252243ce94f90ca94b0d2ef1-0.
INFO 03-02 01:30:13 [logger.py:42] Received request cmpl-855f049d67ad4252aebdb045a055b0bb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:13 [async_llm.py:261] Added request cmpl-855f049d67ad4252aebdb045a055b0bb-0.
INFO 03-02 01:30:14 [logger.py:42] Received request cmpl-26c5e23a3cd540b4a8616b7fb79ecd73-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:14 [async_llm.py:261] Added request cmpl-26c5e23a3cd540b4a8616b7fb79ecd73-0.
INFO 03-02 01:30:15 [logger.py:42] Received request cmpl-05f54d56f470405fa5a910cd3d1afa27-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:15 [async_llm.py:261] Added request cmpl-05f54d56f470405fa5a910cd3d1afa27-0.
INFO 03-02 01:30:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:30:17 [logger.py:42] Received request cmpl-108868c857b242e68cca86eacfcdddd7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:17 [async_llm.py:261] Added request cmpl-108868c857b242e68cca86eacfcdddd7-0.
INFO 03-02 01:30:18 [logger.py:42] Received request cmpl-255763fb33d64ec69671b0d5562ba811-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:18 [async_llm.py:261] Added request cmpl-255763fb33d64ec69671b0d5562ba811-0.
INFO 03-02 01:30:19 [logger.py:42] Received request cmpl-92a51e58865841cbb82b578e1cb42106-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:19 [async_llm.py:261] Added request cmpl-92a51e58865841cbb82b578e1cb42106-0.
INFO 03-02 01:30:20 [logger.py:42] Received request cmpl-ba8d6f6b8a3a4d9a9b116bd9cef79063-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:20 [async_llm.py:261] Added request cmpl-ba8d6f6b8a3a4d9a9b116bd9cef79063-0.
INFO 03-02 01:30:21 [logger.py:42] Received request cmpl-9e35e6d0c23d425ea93968b73710d539-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:21 [async_llm.py:261] Added request cmpl-9e35e6d0c23d425ea93968b73710d539-0.
INFO 03-02 01:30:22 [logger.py:42] Received request cmpl-cc5c385c8c68446ea3ed06b0c62cc9ce-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:22 [async_llm.py:261] Added request cmpl-cc5c385c8c68446ea3ed06b0c62cc9ce-0.
INFO 03-02 01:30:23 [logger.py:42] Received request cmpl-aa31acca24ce4308aad0b68c4f481c03-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:23 [async_llm.py:261] Added request cmpl-aa31acca24ce4308aad0b68c4f481c03-0.
INFO 03-02 01:30:24 [logger.py:42] Received request cmpl-5a1b2161453e4dce99f0cf7e46e56524-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:24 [async_llm.py:261] Added request cmpl-5a1b2161453e4dce99f0cf7e46e56524-0.
INFO 03-02 01:30:25 [logger.py:42] Received request cmpl-9bd6dea027a9453aaeecaeb416a9eddc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:25 [async_llm.py:261] Added request cmpl-9bd6dea027a9453aaeecaeb416a9eddc-0.
INFO 03-02 01:30:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:30:26 [logger.py:42] Received request cmpl-3a88c90e6d504386a43ced6a6cff0a3a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:26 [async_llm.py:261] Added request cmpl-3a88c90e6d504386a43ced6a6cff0a3a-0.
INFO 03-02 01:30:27 [logger.py:42] Received request cmpl-aad134fb6a674dc0830733cac7514479-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:27 [async_llm.py:261] Added request cmpl-aad134fb6a674dc0830733cac7514479-0.
INFO 03-02 01:30:28 [logger.py:42] Received request cmpl-ece085968bf747d3bda29ef87ae2ca34-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:28 [async_llm.py:261] Added request cmpl-ece085968bf747d3bda29ef87ae2ca34-0.
INFO 03-02 01:30:30 [logger.py:42] Received request cmpl-2504535936424616923f8ab60c48596d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:30 [async_llm.py:261] Added request cmpl-2504535936424616923f8ab60c48596d-0.
INFO 03-02 01:30:31 [logger.py:42] Received request cmpl-9af954a1597c4e249abbbfc71dc24bf6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:31 [async_llm.py:261] Added request cmpl-9af954a1597c4e249abbbfc71dc24bf6-0.
INFO 03-02 01:30:32 [logger.py:42] Received request cmpl-4dae2d78f07c49a4a3c0be24601024ef-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:32 [async_llm.py:261] Added request cmpl-4dae2d78f07c49a4a3c0be24601024ef-0.
INFO 03-02 01:30:33 [logger.py:42] Received request cmpl-5716e3082f4f4761a3d6a8aceb7ebf1c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:33 [async_llm.py:261] Added request cmpl-5716e3082f4f4761a3d6a8aceb7ebf1c-0.
INFO 03-02 01:30:34 [logger.py:42] Received request cmpl-da1a10538e1548b3848287c3436a5deb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:34 [async_llm.py:261] Added request cmpl-da1a10538e1548b3848287c3436a5deb-0.
INFO 03-02 01:30:35 [logger.py:42] Received request cmpl-fe53889c93b74e25b477d6ff2c8182b1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:35 [async_llm.py:261] Added request cmpl-fe53889c93b74e25b477d6ff2c8182b1-0.
INFO 03-02 01:30:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:30:36 [logger.py:42] Received request cmpl-dd15e9042a7345b6adf0d3bdbfeed029-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:36 [async_llm.py:261] Added request cmpl-dd15e9042a7345b6adf0d3bdbfeed029-0.
INFO 03-02 01:30:37 [logger.py:42] Received request cmpl-6ec6ccf27b0142d3b59b874e7d688cda-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:37 [async_llm.py:261] Added request cmpl-6ec6ccf27b0142d3b59b874e7d688cda-0.
INFO 03-02 01:30:38 [logger.py:42] Received request cmpl-3e17ed7dbf9d496d81221739c8f909a2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:38 [async_llm.py:261] Added request cmpl-3e17ed7dbf9d496d81221739c8f909a2-0.
INFO 03-02 01:30:39 [logger.py:42] Received request cmpl-c6c692dc8fb54bf29a581c31ce9b0838-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:39 [async_llm.py:261] Added request cmpl-c6c692dc8fb54bf29a581c31ce9b0838-0.
INFO 03-02 01:30:40 [logger.py:42] Received request cmpl-0fcd6c85faaf448288882c856489b155-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:40 [async_llm.py:261] Added request cmpl-0fcd6c85faaf448288882c856489b155-0.
INFO 03-02 01:30:42 [logger.py:42] Received request cmpl-f1ace7149000416ea4f4f55fe9aff107-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:42 [async_llm.py:261] Added request cmpl-f1ace7149000416ea4f4f55fe9aff107-0.
INFO 03-02 01:30:43 [logger.py:42] Received request cmpl-d2fae6829da54125931c135e23f663b8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:43 [async_llm.py:261] Added request cmpl-d2fae6829da54125931c135e23f663b8-0.
INFO 03-02 01:30:44 [logger.py:42] Received request cmpl-fda5ff29a9024700b9c25c0c7a0a1d62-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:44 [async_llm.py:261] Added request cmpl-fda5ff29a9024700b9c25c0c7a0a1d62-0.
INFO 03-02 01:30:45 [logger.py:42] Received request cmpl-3a64e0128ba542d1b96e1c8c8788ab8d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:45 [async_llm.py:261] Added request cmpl-3a64e0128ba542d1b96e1c8c8788ab8d-0.
INFO 03-02 01:30:46 [logger.py:42] Received request cmpl-21a43413015d4adc897368ee24baef27-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:46 [async_llm.py:261] Added request cmpl-21a43413015d4adc897368ee24baef27-0.
INFO 03-02 01:30:46 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.8 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 1.3%, Prefix cache hit rate: 0.0%
INFO 03-02 01:30:47 [logger.py:42] Received request cmpl-e8b37c6359214401885a76db2337b5a9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:47 [async_llm.py:261] Added request cmpl-e8b37c6359214401885a76db2337b5a9-0.
INFO 03-02 01:30:48 [logger.py:42] Received request cmpl-41eb2a8047b44908b6a2f8c37ac14f23-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:48 [async_llm.py:261] Added request cmpl-41eb2a8047b44908b6a2f8c37ac14f23-0.
INFO 03-02 01:30:49 [logger.py:42] Received request cmpl-c44ee4d1eb534602b4b1311fec9433af-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:49 [async_llm.py:261] Added request cmpl-c44ee4d1eb534602b4b1311fec9433af-0.
INFO 03-02 01:30:50 [logger.py:42] Received request cmpl-ab856d59cb034f569f191c30e10cc3a8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:50 [async_llm.py:261] Added request cmpl-ab856d59cb034f569f191c30e10cc3a8-0.
INFO 03-02 01:30:51 [logger.py:42] Received request cmpl-5fe5e904df8a4bae8af99cc86f238541-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:51 [async_llm.py:261] Added request cmpl-5fe5e904df8a4bae8af99cc86f238541-0.
INFO 03-02 01:30:52 [logger.py:42] Received request cmpl-4914c98b01bc4f0fa141c6e7f46bdec3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:52 [async_llm.py:261] Added request cmpl-4914c98b01bc4f0fa141c6e7f46bdec3-0.
INFO 03-02 01:30:53 [logger.py:42] Received request cmpl-4cfb450930de4da58bc5f0bc5a01b2b7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:53 [async_llm.py:261] Added request cmpl-4cfb450930de4da58bc5f0bc5a01b2b7-0.
INFO 03-02 01:30:55 [logger.py:42] Received request cmpl-f7ee9759561245bc8eee03b84ff156cb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:55 [async_llm.py:261] Added request cmpl-f7ee9759561245bc8eee03b84ff156cb-0.
INFO 03-02 01:30:56 [logger.py:42] Received request cmpl-3ffa84fbc6134da0ad87db0429196202-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:56 [async_llm.py:261] Added request cmpl-3ffa84fbc6134da0ad87db0429196202-0.
INFO 03-02 01:30:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.7 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:30:57 [logger.py:42] Received request cmpl-233d1ca181bc448bb6903cc1afdd32e3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:57 [async_llm.py:261] Added request cmpl-233d1ca181bc448bb6903cc1afdd32e3-0.
INFO 03-02 01:30:58 [logger.py:42] Received request cmpl-b2ad4f343e4d4cfaac188eec0e71f9b5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:58 [async_llm.py:261] Added request cmpl-b2ad4f343e4d4cfaac188eec0e71f9b5-0.
INFO 03-02 01:30:59 [logger.py:42] Received request cmpl-d52f0dc52cb7436d8590fddbe9f253ad-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:59 [async_llm.py:261] Added request cmpl-d52f0dc52cb7436d8590fddbe9f253ad-0.
INFO 03-02 01:31:00 [logger.py:42] Received request cmpl-33267470211647b9b28bdea1d07f77ae-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:31:00 [async_llm.py:261] Added request cmpl-33267470211647b9b28bdea1d07f77ae-0.
INFO 03-02 01:31:01 [logger.py:42] Received request cmpl-250e2ea6a8264a29aed34c6f66573c99-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:31:01 [async_llm.py:261] Added request cmpl-250e2ea6a8264a29aed34c6f66573c99-0.
INFO 03-02 01:31:02 [logger.py:42] Received request cmpl-ce160c8de1b940618e2127158ffd7eba-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:31:02 [async_llm.py:261] Added request cmpl-ce160c8de1b940618e2127158ffd7eba-0.
INFO 03-02 01:31:03 [logger.py:42] Received request cmpl-cfd7a28a39b7432e942a32b84c6defd4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:31:03 [async_llm.py:261] Added request cmpl-cfd7a28a39b7432e942a32b84c6defd4-0.
[Log condensed: the same "Received request" / "POST /v1/completions 200 OK" / "Added request" triplet repeats roughly once per second from 01:31:04 through 01:31:47 for 40 further requests, identical except for the cmpl-* request IDs and timestamps (same prompt 'write a quick sort algorithm.', same SamplingParams with temperature=0.0 and max_tokens=5, same prompt_token_ids). Periodic engine stats from the same window:]
INFO 03-02 01:31:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:31:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:31:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:31:36 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:31:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:31:47 [async_llm.py:261] Added request cmpl-9e1e1c6642b14b0ea71dc072bf0aed5f-0.
INFO 03-02 01:31:48 [logger.py:42] Received request cmpl-7c768e2614f94be28ccd22bbcea025a8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:31:48 [async_llm.py:261] Added request cmpl-7c768e2614f94be28ccd22bbcea025a8-0.
INFO 03-02 01:31:49 [logger.py:42] Received request cmpl-1f466d686c9d431c9ad6bd6a36779d07-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:31:49 [async_llm.py:261] Added request cmpl-1f466d686c9d431c9ad6bd6a36779d07-0.
INFO 03-02 01:31:50 [logger.py:42] Received request cmpl-634bd9a8bfba46029795cc7ed13533d7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:31:50 [async_llm.py:261] Added request cmpl-634bd9a8bfba46029795cc7ed13533d7-0.
INFO 03-02 01:31:51 [logger.py:42] Received request cmpl-44327c833640415985ed304239e46443-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:31:51 [async_llm.py:261] Added request cmpl-44327c833640415985ed304239e46443-0.
INFO 03-02 01:31:52 [logger.py:42] Received request cmpl-7ba9399f6d1b4c71959c8791aa1fc839-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:31:52 [async_llm.py:261] Added request cmpl-7ba9399f6d1b4c71959c8791aa1fc839-0.
INFO 03-02 01:31:53 [logger.py:42] Received request cmpl-077c91d002324c0689dbce67af76c8fd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:31:53 [async_llm.py:261] Added request cmpl-077c91d002324c0689dbce67af76c8fd-0.
INFO 03-02 01:31:54 [logger.py:42] Received request cmpl-02f1b59fc7dc4e4c84660cd1190a9405-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:31:54 [async_llm.py:261] Added request cmpl-02f1b59fc7dc4e4c84660cd1190a9405-0.
INFO 03-02 01:31:55 [logger.py:42] Received request cmpl-097853e28cf748b98c6cf62c0b103ba8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:31:55 [async_llm.py:261] Added request cmpl-097853e28cf748b98c6cf62c0b103ba8-0.
INFO 03-02 01:31:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:31:56 [logger.py:42] Received request cmpl-cfae92c603e244dfb664ec4b765a8da0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:31:56 [async_llm.py:261] Added request cmpl-cfae92c603e244dfb664ec4b765a8da0-0.
INFO 03-02 01:31:57 [logger.py:42] Received request cmpl-dde31cd2d04f4cc1b2c6ea18e0e3526e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:31:57 [async_llm.py:261] Added request cmpl-dde31cd2d04f4cc1b2c6ea18e0e3526e-0.
INFO 03-02 01:31:58 [logger.py:42] Received request cmpl-76e9479948a941adbca06dbd73f8faa4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:31:58 [async_llm.py:261] Added request cmpl-76e9479948a941adbca06dbd73f8faa4-0.
INFO 03-02 01:32:00 [logger.py:42] Received request cmpl-01bcfa116e5d429b82a3e24132df1ba2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:00 [async_llm.py:261] Added request cmpl-01bcfa116e5d429b82a3e24132df1ba2-0.
INFO 03-02 01:32:01 [logger.py:42] Received request cmpl-66c75de67ad44269826422b8f21e72a3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:01 [async_llm.py:261] Added request cmpl-66c75de67ad44269826422b8f21e72a3-0.
INFO 03-02 01:32:02 [logger.py:42] Received request cmpl-8fd3b261fd9e4d75b12494af8c88e599-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:02 [async_llm.py:261] Added request cmpl-8fd3b261fd9e4d75b12494af8c88e599-0.
INFO 03-02 01:32:03 [logger.py:42] Received request cmpl-0d3ed20a3c46436bbbb627e67161a7d5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:03 [async_llm.py:261] Added request cmpl-0d3ed20a3c46436bbbb627e67161a7d5-0.
INFO 03-02 01:32:04 [logger.py:42] Received request cmpl-0fc7ace5a6064ac59bc4b6b500c42620-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:04 [async_llm.py:261] Added request cmpl-0fc7ace5a6064ac59bc4b6b500c42620-0.
INFO 03-02 01:32:05 [logger.py:42] Received request cmpl-1220ba08475541c9a3e5103eac0cacf8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:05 [async_llm.py:261] Added request cmpl-1220ba08475541c9a3e5103eac0cacf8-0.
INFO 03-02 01:32:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:32:06 [logger.py:42] Received request cmpl-63ab3cdcd47f4bfe8d454a3e763870d1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:06 [async_llm.py:261] Added request cmpl-63ab3cdcd47f4bfe8d454a3e763870d1-0.
INFO 03-02 01:32:07 [logger.py:42] Received request cmpl-6daeb3fd9ada49e68fae839cf3e27126-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:07 [async_llm.py:261] Added request cmpl-6daeb3fd9ada49e68fae839cf3e27126-0.
INFO 03-02 01:32:08 [logger.py:42] Received request cmpl-fb4ba8025510457ca774926fe91e6fd9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:08 [async_llm.py:261] Added request cmpl-fb4ba8025510457ca774926fe91e6fd9-0.
INFO 03-02 01:32:09 [logger.py:42] Received request cmpl-d37c769ac7ac471589b72b5b78c94686-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:09 [async_llm.py:261] Added request cmpl-d37c769ac7ac471589b72b5b78c94686-0.
INFO 03-02 01:32:10 [logger.py:42] Received request cmpl-3d2ae42ac1bc4f21ab63c33aec9f222d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:10 [async_llm.py:261] Added request cmpl-3d2ae42ac1bc4f21ab63c33aec9f222d-0.
INFO 03-02 01:32:11 [logger.py:42] Received request cmpl-3857783a5d874ba7ae88827e8857744d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:11 [async_llm.py:261] Added request cmpl-3857783a5d874ba7ae88827e8857744d-0.
INFO 03-02 01:32:13 [logger.py:42] Received request cmpl-df4449985cac48cc8f772e831334b922-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:13 [async_llm.py:261] Added request cmpl-df4449985cac48cc8f772e831334b922-0.
INFO 03-02 01:32:14 [logger.py:42] Received request cmpl-9dc3f58a30fc4dd1aa16122aa3e06b9c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:14 [async_llm.py:261] Added request cmpl-9dc3f58a30fc4dd1aa16122aa3e06b9c-0.
INFO 03-02 01:32:15 [logger.py:42] Received request cmpl-9a904c6f109942fe87fe39b15fe66172-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:15 [async_llm.py:261] Added request cmpl-9a904c6f109942fe87fe39b15fe66172-0.
INFO 03-02 01:32:16 [logger.py:42] Received request cmpl-59edb3dea8d1407d90a5684ef81bb395-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:16 [async_llm.py:261] Added request cmpl-59edb3dea8d1407d90a5684ef81bb395-0.
INFO 03-02 01:32:16 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.9 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 1.3%, Prefix cache hit rate: 0.0%
INFO 03-02 01:32:17 [logger.py:42] Received request cmpl-4bc29b1ed2974611b01e25a079fe488f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:17 [async_llm.py:261] Added request cmpl-4bc29b1ed2974611b01e25a079fe488f-0.
INFO 03-02 01:32:18 [logger.py:42] Received request cmpl-4b385187e9d64b7f970514570db710f2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:18 [async_llm.py:261] Added request cmpl-4b385187e9d64b7f970514570db710f2-0.
INFO 03-02 01:32:19 [logger.py:42] Received request cmpl-1d410b4aca924cf59c8f17eff105edd6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:19 [async_llm.py:261] Added request cmpl-1d410b4aca924cf59c8f17eff105edd6-0.
INFO 03-02 01:32:20 [logger.py:42] Received request cmpl-67bb205c1b894bc7a975107f0c6f572d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:20 [async_llm.py:261] Added request cmpl-67bb205c1b894bc7a975107f0c6f572d-0.
INFO 03-02 01:32:21 [logger.py:42] Received request cmpl-8c7cdda0db5e4d5696a707c2452e1192-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:21 [async_llm.py:261] Added request cmpl-8c7cdda0db5e4d5696a707c2452e1192-0.
INFO 03-02 01:32:22 [logger.py:42] Received request cmpl-ff9183f597cb4acc84474d06dbae6e74-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:22 [async_llm.py:261] Added request cmpl-ff9183f597cb4acc84474d06dbae6e74-0.
INFO 03-02 01:32:23 [logger.py:42] Received request cmpl-b22dd31207b94d83a7920edb86231516-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:23 [async_llm.py:261] Added request cmpl-b22dd31207b94d83a7920edb86231516-0.
INFO 03-02 01:32:25 [logger.py:42] Received request cmpl-6dfe702e4ddd4c73ab20a33794b3ff2e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:25 [async_llm.py:261] Added request cmpl-6dfe702e4ddd4c73ab20a33794b3ff2e-0.
INFO 03-02 01:32:26 [logger.py:42] Received request cmpl-a889408f1ac146c893d3e492371b6cc7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:26 [async_llm.py:261] Added request cmpl-a889408f1ac146c893d3e492371b6cc7-0.
INFO 03-02 01:32:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.6 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:32:27 [logger.py:42] Received request cmpl-501778191204464283390459e7095035-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:27 [async_llm.py:261] Added request cmpl-501778191204464283390459e7095035-0.
INFO 03-02 01:32:28 [logger.py:42] Received request cmpl-c93804a8b8054f0d809f77cb07a1e673-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:28 [async_llm.py:261] Added request cmpl-c93804a8b8054f0d809f77cb07a1e673-0.
INFO 03-02 01:32:29 [logger.py:42] Received request cmpl-1c544a6d76414dd992e5de4bb8807271-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:29 [async_llm.py:261] Added request cmpl-1c544a6d76414dd992e5de4bb8807271-0.
INFO 03-02 01:32:30 [logger.py:42] Received request cmpl-86eaa1d34ea448b59a331183a0b6ecb4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:30 [async_llm.py:261] Added request cmpl-86eaa1d34ea448b59a331183a0b6ecb4-0.
INFO 03-02 01:32:31 [logger.py:42] Received request cmpl-7388787af3e2417abcd1b11823f6dd02-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:31 [async_llm.py:261] Added request cmpl-7388787af3e2417abcd1b11823f6dd02-0.
INFO 03-02 01:32:32 [logger.py:42] Received request cmpl-010de1e0c39147379562bae881ed57f2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:32 [async_llm.py:261] Added request cmpl-010de1e0c39147379562bae881ed57f2-0.
INFO 03-02 01:32:33 [logger.py:42] Received request cmpl-eb751862de8a40d783db373e78914737-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:33 [async_llm.py:261] Added request cmpl-eb751862de8a40d783db373e78914737-0.
INFO 03-02 01:32:34 [logger.py:42] Received request cmpl-43de803a93f741b8b31601f6c57a4781-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:34 [async_llm.py:261] Added request cmpl-43de803a93f741b8b31601f6c57a4781-0.
INFO 03-02 01:32:35 [logger.py:42] Received request cmpl-9a9c153198cc4b83ad9a5b2f36f628be-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:35 [async_llm.py:261] Added request cmpl-9a9c153198cc4b83ad9a5b2f36f628be-0.
INFO 03-02 01:32:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:32:36 [logger.py:42] Received request cmpl-5a447121bfc74b79ad4c9615d3913f00-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:36 [async_llm.py:261] Added request cmpl-5a447121bfc74b79ad4c9615d3913f00-0.
INFO 03-02 01:32:38 [logger.py:42] Received request cmpl-8c2ab6b20de04db8afd7f87f7832cfc6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:38 [async_llm.py:261] Added request cmpl-8c2ab6b20de04db8afd7f87f7832cfc6-0.
INFO 03-02 01:32:39 [logger.py:42] Received request cmpl-1cd18d89daba4ddaa6b6cb12718c0e60-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:39 [async_llm.py:261] Added request cmpl-1cd18d89daba4ddaa6b6cb12718c0e60-0.
INFO 03-02 01:32:40 [logger.py:42] Received request cmpl-1179bb01efc54340bcdcfa65be631b0f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:40 [async_llm.py:261] Added request cmpl-1179bb01efc54340bcdcfa65be631b0f-0.
INFO 03-02 01:32:41 [logger.py:42] Received request cmpl-83671a83e20f473ebe0a2637e3077b2d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:41 [async_llm.py:261] Added request cmpl-83671a83e20f473ebe0a2637e3077b2d-0.
INFO 03-02 01:32:42 [logger.py:42] Received request cmpl-ba540c3d51d14cde96f95cdb0c58b4c7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:42 [async_llm.py:261] Added request cmpl-ba540c3d51d14cde96f95cdb0c58b4c7-0.
INFO 03-02 01:32:43 [logger.py:42] Received request cmpl-a381692147cb4fc3936f8fe090f042e4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:43 [async_llm.py:261] Added request cmpl-a381692147cb4fc3936f8fe090f042e4-0.
INFO 03-02 01:32:44 [logger.py:42] Received request cmpl-b09f57537e7142c98c47e31f62bf8417-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:44 [async_llm.py:261] Added request cmpl-b09f57537e7142c98c47e31f62bf8417-0.
INFO 03-02 01:32:45 [logger.py:42] Received request cmpl-a0cd3ea49f5f41be8cd72aae240a578a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:45 [async_llm.py:261] Added request cmpl-a0cd3ea49f5f41be8cd72aae240a578a-0.
INFO 03-02 01:32:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:32:46 [logger.py:42] Received request cmpl-951249a5367b4c38b22c95a05e3da48f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:46 [async_llm.py:261] Added request cmpl-951249a5367b4c38b22c95a05e3da48f-0.
INFO 03-02 01:32:47 [logger.py:42] Received request cmpl-bc52dc1d97514f3186c0fffe0be2247b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:47 [async_llm.py:261] Added request cmpl-bc52dc1d97514f3186c0fffe0be2247b-0.
INFO 03-02 01:32:48 [logger.py:42] Received request cmpl-779f6ffcde624f49beeff2845b2d251f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:48 [async_llm.py:261] Added request cmpl-779f6ffcde624f49beeff2845b2d251f-0.
INFO 03-02 01:32:49 [logger.py:42] Received request cmpl-e5e796d951d448ff8a0ff296168772a8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:49 [async_llm.py:261] Added request cmpl-e5e796d951d448ff8a0ff296168772a8-0.
INFO 03-02 01:32:51 [logger.py:42] Received request cmpl-05001cb5924f44b48b9ee9b2fb3cc78a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:51 [async_llm.py:261] Added request cmpl-05001cb5924f44b48b9ee9b2fb3cc78a-0.
INFO 03-02 01:32:52 [logger.py:42] Received request cmpl-bfcdd183e35d43c1ab56ffbcf860c6f2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:52 [async_llm.py:261] Added request cmpl-bfcdd183e35d43c1ab56ffbcf860c6f2-0.
INFO 03-02 01:32:53 [logger.py:42] Received request cmpl-b2606f68bfdc4353962503a5a3507e4f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:53 [async_llm.py:261] Added request cmpl-b2606f68bfdc4353962503a5a3507e4f-0.
INFO 03-02 01:32:54 [logger.py:42] Received request cmpl-a3b9138664b7446bb61392ca850259c5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:54 [async_llm.py:261] Added request cmpl-a3b9138664b7446bb61392ca850259c5-0.
INFO 03-02 01:32:55 [logger.py:42] Received request cmpl-980cd7621b994cb4b8adf9e501e50012-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:55 [async_llm.py:261] Added request cmpl-980cd7621b994cb4b8adf9e501e50012-0.
INFO 03-02 01:32:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:32:56 [logger.py:42] Received request cmpl-1c25935a1e1846ad8ed8406f74432bab-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:56 [async_llm.py:261] Added request cmpl-1c25935a1e1846ad8ed8406f74432bab-0.
INFO 03-02 01:32:57 [logger.py:42] Received request cmpl-a831527ae8af4948971a0e64d97ecd38-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:57 [async_llm.py:261] Added request cmpl-a831527ae8af4948971a0e64d97ecd38-0.
INFO 03-02 01:32:58 [logger.py:42] Received request cmpl-fb8170a2c14a42bbb2466461417cf823-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:58 [async_llm.py:261] Added request cmpl-fb8170a2c14a42bbb2466461417cf823-0.
INFO 03-02 01:32:59 [logger.py:42] Received request cmpl-2c37845824104e5480ae2bbab20fceee-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:59 [async_llm.py:261] Added request cmpl-2c37845824104e5480ae2bbab20fceee-0.
INFO 03-02 01:33:00 [logger.py:42] Received request cmpl-f8b9f6df113c4e6191fb248238da4dfd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:33:00 [async_llm.py:261] Added request cmpl-f8b9f6df113c4e6191fb248238da4dfd-0.
[… entries from 01:33:01 through 01:33:44 repeat the pattern above: roughly one POST /v1/completions per second from 1.2.3.5, each with the identical prompt and SamplingParams (temperature=0.0, max_tokens=5), each returning "200 OK" and logged as added to the engine; only the periodic engine stats vary …]
INFO 03-02 01:33:06 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:33:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:33:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:33:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:33:45 [logger.py:42] Received request cmpl-ed0fa1b3916149efbd623b357b26be58-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:33:45 [async_llm.py:261] Added request cmpl-ed0fa1b3916149efbd623b357b26be58-0.
INFO 03-02 01:33:46 [logger.py:42] Received request cmpl-a80f367128984dae81ab566e97c4c0ca-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:33:46 [async_llm.py:261] Added request cmpl-a80f367128984dae81ab566e97c4c0ca-0.
INFO 03-02 01:33:46 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.9 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 1.3%, Prefix cache hit rate: 0.0%
INFO 03-02 01:33:47 [logger.py:42] Received request cmpl-a469d5b86e514470a5b1d6d1e636ddd4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:33:47 [async_llm.py:261] Added request cmpl-a469d5b86e514470a5b1d6d1e636ddd4-0.
INFO 03-02 01:33:48 [logger.py:42] Received request cmpl-995c7e1f47da4cce813b8168d83e4f5f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:33:48 [async_llm.py:261] Added request cmpl-995c7e1f47da4cce813b8168d83e4f5f-0.
INFO 03-02 01:33:49 [logger.py:42] Received request cmpl-c5beffd0a079476a89f779090c07bac1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:33:49 [async_llm.py:261] Added request cmpl-c5beffd0a079476a89f779090c07bac1-0.
INFO 03-02 01:33:50 [logger.py:42] Received request cmpl-44f0d3fa1d48477185665342c23fbacb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:33:50 [async_llm.py:261] Added request cmpl-44f0d3fa1d48477185665342c23fbacb-0.
INFO 03-02 01:33:51 [logger.py:42] Received request cmpl-0b2ae0f02b4e45a993755732c243b145-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:33:51 [async_llm.py:261] Added request cmpl-0b2ae0f02b4e45a993755732c243b145-0.
INFO 03-02 01:33:52 [logger.py:42] Received request cmpl-0d4255d16a1b44a39faccd134efbc382-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:33:52 [async_llm.py:261] Added request cmpl-0d4255d16a1b44a39faccd134efbc382-0.
INFO 03-02 01:33:53 [logger.py:42] Received request cmpl-144d83940f8245798f48066422f72715-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:33:53 [async_llm.py:261] Added request cmpl-144d83940f8245798f48066422f72715-0.
INFO 03-02 01:33:54 [logger.py:42] Received request cmpl-e5806cb6914346debd6fd2ad08ebf509-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:33:54 [async_llm.py:261] Added request cmpl-e5806cb6914346debd6fd2ad08ebf509-0.
INFO 03-02 01:33:56 [logger.py:42] Received request cmpl-16a5e34831c8441991068929b732180f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:33:56 [async_llm.py:261] Added request cmpl-16a5e34831c8441991068929b732180f-0.
INFO 03-02 01:33:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.6 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:33:57 [logger.py:42] Received request cmpl-bc56b7c55a7f4af08c20a6dad4fb3f72-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:33:57 [async_llm.py:261] Added request cmpl-bc56b7c55a7f4af08c20a6dad4fb3f72-0.
INFO 03-02 01:33:58 [logger.py:42] Received request cmpl-b51953c2300b49458b7bfe32c6e216dd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:33:58 [async_llm.py:261] Added request cmpl-b51953c2300b49458b7bfe32c6e216dd-0.
INFO 03-02 01:33:59 [logger.py:42] Received request cmpl-9902d0ecb0ce4b839ce185b63c626923-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:33:59 [async_llm.py:261] Added request cmpl-9902d0ecb0ce4b839ce185b63c626923-0.
INFO 03-02 01:34:00 [logger.py:42] Received request cmpl-856b218eeeeb41b5988a2f9b3566cbc0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:34:00 [async_llm.py:261] Added request cmpl-856b218eeeeb41b5988a2f9b3566cbc0-0.
INFO 03-02 01:34:01 [logger.py:42] Received request cmpl-e7139837c84a42039b3291963783a61d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:34:01 [async_llm.py:261] Added request cmpl-e7139837c84a42039b3291963783a61d-0.
INFO 03-02 01:34:02 [logger.py:42] Received request cmpl-1860ce8b80204f60aa2ba65bf605b7a3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:34:02 [async_llm.py:261] Added request cmpl-1860ce8b80204f60aa2ba65bf605b7a3-0.
INFO 03-02 01:34:03 [logger.py:42] Received request cmpl-bf3ea17d1bcf49a68deb42c3793e0150-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:34:03 [async_llm.py:261] Added request cmpl-bf3ea17d1bcf49a68deb42c3793e0150-0.
INFO 03-02 01:34:04 [logger.py:42] Received request cmpl-d89f9b7ff83748c8a82eff3fa3d51b59-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:34:04 [async_llm.py:261] Added request cmpl-d89f9b7ff83748c8a82eff3fa3d51b59-0.
INFO 03-02 01:34:05 [logger.py:42] Received request cmpl-2f00f9e576d14ce3b6179824627c0bd4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:34:05 [async_llm.py:261] Added request cmpl-2f00f9e576d14ce3b6179824627c0bd4-0.
INFO 03-02 01:34:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:34:06 [logger.py:42] Received request cmpl-a68f2d576df345d1bc5fdd82d0114b57-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:34:06 [async_llm.py:261] Added request cmpl-a68f2d576df345d1bc5fdd82d0114b57-0.
INFO 03-02 01:34:08 [logger.py:42] Received request cmpl-83f6920f4fea425a99ae3e19c7bab2b5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:34:08 [async_llm.py:261] Added request cmpl-83f6920f4fea425a99ae3e19c7bab2b5-0.
INFO 03-02 01:34:09 [logger.py:42] Received request cmpl-84d833dd003845c3a5a770fc7f04fa36-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:34:09 [async_llm.py:261] Added request cmpl-84d833dd003845c3a5a770fc7f04fa36-0.
INFO 03-02 01:34:10 [logger.py:42] Received request cmpl-e7c52c9b39fd49c2b953160a9f432193-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:34:10 [async_llm.py:261] Added request cmpl-e7c52c9b39fd49c2b953160a9f432193-0.
INFO 03-02 01:34:11 [logger.py:42] Received request cmpl-6c8825fb6dc945a7be9ca385e7d56b3c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:34:11 [async_llm.py:261] Added request cmpl-6c8825fb6dc945a7be9ca385e7d56b3c-0.
INFO 03-02 01:34:12 [logger.py:42] Received request cmpl-a28206a8247d4f1896625e67fdfabe8e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:34:12 [async_llm.py:261] Added request cmpl-a28206a8247d4f1896625e67fdfabe8e-0.
INFO 03-02 01:34:13 [logger.py:42] Received request cmpl-7a490eba22b24ba5acbd032b2a7e952a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:34:13 [async_llm.py:261] Added request cmpl-7a490eba22b24ba5acbd032b2a7e952a-0.
INFO 03-02 01:34:14 [logger.py:42] Received request cmpl-619084f8fd8f49b8b48fd17fbcfdb197-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:34:14 [async_llm.py:261] Added request cmpl-619084f8fd8f49b8b48fd17fbcfdb197-0.
INFO 03-02 01:34:15 [logger.py:42] Received request cmpl-485b7a833d1040688a8c4a739bc6dcbd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:34:15 [async_llm.py:261] Added request cmpl-485b7a833d1040688a8c4a739bc6dcbd-0.
INFO 03-02 01:34:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
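The periodic `loggers.py:116` lines above report engine-level metrics in a fixed textual layout. A minimal sketch of extracting them into structured values, assuming the field order and units stay as seen in this capture (the regex may need adjusting for other vLLM versions):

```python
import re

# Pattern matching the engine-stats line format observed in this log.
STATS_RE = re.compile(
    r"Avg prompt throughput: (?P<prompt_tps>[\d.]+) tokens/s, "
    r"Avg generation throughput: (?P<gen_tps>[\d.]+) tokens/s, "
    r"Running: (?P<running>\d+) reqs, "
    r"Waiting: (?P<waiting>\d+) reqs, "
    r"GPU KV cache usage: (?P<kv_usage>[\d.]+)%, "
    r"Prefix cache hit rate: (?P<prefix_hit>[\d.]+)%"
)

def parse_engine_stats(line):
    """Return engine metrics as a dict, or None if the line doesn't match."""
    m = STATS_RE.search(line)
    if m is None:
        return None
    d = m.groupdict()
    return {
        "prompt_tps": float(d["prompt_tps"]),
        "gen_tps": float(d["gen_tps"]),
        "running": int(d["running"]),
        "waiting": int(d["waiting"]),
        "kv_usage_pct": float(d["kv_usage"]),
        "prefix_hit_pct": float(d["prefix_hit"]),
    }

# Example: the stats line logged at 01:34:16.
line = ("INFO 03-02 01:34:16 [loggers.py:116] Engine 000: "
        "Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, "
        "Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, "
        "Prefix cache hit rate: 0.0%")
print(parse_engine_stats(line))
```

Feeding each stats line through a parser like this makes it straightforward to chart throughput and KV-cache pressure over the life of the funcpod.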
INFO 03-02 01:34:16 [logger.py:42] Received request cmpl-0bace8216fba4c4da043b61011dd22d6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:34:16 [async_llm.py:261] Added request cmpl-0bace8216fba4c4da043b61011dd22d6-0.
INFO 03-02 01:34:17 [logger.py:42] Received request cmpl-1ddbdbb7f39a4a99bb27f2752ad2d969-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:34:17 [async_llm.py:261] Added request cmpl-1ddbdbb7f39a4a99bb27f2752ad2d969-0.
INFO 03-02 01:34:18 [logger.py:42] Received request cmpl-61bb38988e654f10bd58c816e618489e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:34:18 [async_llm.py:261] Added request cmpl-61bb38988e654f10bd58c816e618489e-0.
[... the same three-line Received / 200 OK / Added pattern repeats at roughly one request per second, with identical prompt and SamplingParams; only the request ID and timestamp change. Periodic engine stats emitted during this window: ...]
INFO 03-02 01:34:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:34:36 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:34:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:34:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[... request stream continues through 01:35:03; last entry in this excerpt: Received request cmpl-a656fc3b69ad459fbefabc226babe62a-0 ...]
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:03 [async_llm.py:261] Added request cmpl-a656fc3b69ad459fbefabc226babe62a-0.
INFO 03-02 01:35:04 [logger.py:42] Received request cmpl-e3c509dde7db4cc393ffa427fee09ca3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:04 [async_llm.py:261] Added request cmpl-e3c509dde7db4cc393ffa427fee09ca3-0.
INFO 03-02 01:35:05 [logger.py:42] Received request cmpl-da2774fe12f645658f6a60712c544951-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:05 [async_llm.py:261] Added request cmpl-da2774fe12f645658f6a60712c544951-0.
INFO 03-02 01:35:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
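The periodic `loggers.py` line above carries the engine's rolling metrics (throughput, queue depths, KV-cache usage). A small sketch for pulling those numbers out of such a line for monitoring; the field names follow the log text itself:

```python
import re

# Matches the metrics portion of a vLLM "Engine 000: ..." stats line.
STATS_RE = re.compile(
    r"Avg prompt throughput: (?P<prompt_tps>[\d.]+) tokens/s, "
    r"Avg generation throughput: (?P<gen_tps>[\d.]+) tokens/s, "
    r"Running: (?P<running>\d+) reqs, Waiting: (?P<waiting>\d+) reqs, "
    r"GPU KV cache usage: (?P<kv_pct>[\d.]+)%, "
    r"Prefix cache hit rate: (?P<prefix_hit_pct>[\d.]+)%"
)

def parse_engine_stats(line: str) -> dict:
    """Return the engine metrics as floats, or an empty dict if absent."""
    m = STATS_RE.search(line)
    return {k: float(v) for k, v in m.groupdict().items()} if m else {}

line = ("INFO 03-02 01:35:06 [loggers.py:116] Engine 000: "
        "Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, "
        "Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, "
        "Prefix cache hit rate: 0.0%")
stats = parse_engine_stats(line)
```

Note that "Running: 0 reqs" at each sample point is expected here: the 5-token completions finish well within the 10-second stats interval, so the snapshot almost always catches an idle engine.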
INFO 03-02 01:35:06 [logger.py:42] Received request cmpl-7929dbbfcaa74a3ca62c5e02de278578-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:06 [async_llm.py:261] Added request cmpl-7929dbbfcaa74a3ca62c5e02de278578-0.
INFO 03-02 01:35:07 [logger.py:42] Received request cmpl-48ab9dca04be4d84ab6857602bea1318-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:07 [async_llm.py:261] Added request cmpl-48ab9dca04be4d84ab6857602bea1318-0.
INFO 03-02 01:35:08 [logger.py:42] Received request cmpl-5a69293da5b1450da7965f18c7daabec-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:08 [async_llm.py:261] Added request cmpl-5a69293da5b1450da7965f18c7daabec-0.
INFO 03-02 01:35:09 [logger.py:42] Received request cmpl-0a00f47b90054b0fabff25de3fd07df8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:09 [async_llm.py:261] Added request cmpl-0a00f47b90054b0fabff25de3fd07df8-0.
INFO 03-02 01:35:10 [logger.py:42] Received request cmpl-56d7fd61c3d44419b893a9d2d5c0f84a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:10 [async_llm.py:261] Added request cmpl-56d7fd61c3d44419b893a9d2d5c0f84a-0.
INFO 03-02 01:35:11 [logger.py:42] Received request cmpl-995498867aaf4097a89fbf44d12e2a21-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:11 [async_llm.py:261] Added request cmpl-995498867aaf4097a89fbf44d12e2a21-0.
INFO 03-02 01:35:13 [logger.py:42] Received request cmpl-ff35a410a65643bb83c0e766099be480-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:13 [async_llm.py:261] Added request cmpl-ff35a410a65643bb83c0e766099be480-0.
INFO 03-02 01:35:14 [logger.py:42] Received request cmpl-b61c8617fa8345e3b2d5652defe6ed7d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:14 [async_llm.py:261] Added request cmpl-b61c8617fa8345e3b2d5652defe6ed7d-0.
INFO 03-02 01:35:15 [logger.py:42] Received request cmpl-790446d0c9d74895b8b2b2b4b821018a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:15 [async_llm.py:261] Added request cmpl-790446d0c9d74895b8b2b2b4b821018a-0.
INFO 03-02 01:35:16 [logger.py:42] Received request cmpl-e3e4803249e845a08756e982cd7030c6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:16 [async_llm.py:261] Added request cmpl-e3e4803249e845a08756e982cd7030c6-0.
INFO 03-02 01:35:16 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:35:17 [logger.py:42] Received request cmpl-fffc822cdd5f4a11852658ab699ec09d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:17 [async_llm.py:261] Added request cmpl-fffc822cdd5f4a11852658ab699ec09d-0.
INFO 03-02 01:35:18 [logger.py:42] Received request cmpl-a44c8ee1841b4addb804bd9b343dcb3e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:18 [async_llm.py:261] Added request cmpl-a44c8ee1841b4addb804bd9b343dcb3e-0.
INFO 03-02 01:35:19 [logger.py:42] Received request cmpl-280914e64fdd45d9a048b6b88ff22cb1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:19 [async_llm.py:261] Added request cmpl-280914e64fdd45d9a048b6b88ff22cb1-0.
INFO 03-02 01:35:20 [logger.py:42] Received request cmpl-b81936b6a94e41b1a86572de9ee3b164-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:20 [async_llm.py:261] Added request cmpl-b81936b6a94e41b1a86572de9ee3b164-0.
INFO 03-02 01:35:21 [logger.py:42] Received request cmpl-a50e051e26bd4ca694df64814cf00bb5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:21 [async_llm.py:261] Added request cmpl-a50e051e26bd4ca694df64814cf00bb5-0.
INFO 03-02 01:35:22 [logger.py:42] Received request cmpl-62fba9df1194426f99e73e06e5d78738-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:22 [async_llm.py:261] Added request cmpl-62fba9df1194426f99e73e06e5d78738-0.
INFO 03-02 01:35:23 [logger.py:42] Received request cmpl-6e9a16eb075e47409f20d432cb4e909f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:23 [async_llm.py:261] Added request cmpl-6e9a16eb075e47409f20d432cb4e909f-0.
INFO 03-02 01:35:24 [logger.py:42] Received request cmpl-930c07e0df694d72be6b82e68e47d25a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:24 [async_llm.py:261] Added request cmpl-930c07e0df694d72be6b82e68e47d25a-0.
INFO 03-02 01:35:26 [logger.py:42] Received request cmpl-db83b00a03944acfbe70face1212c0ba-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:26 [async_llm.py:261] Added request cmpl-db83b00a03944acfbe70face1212c0ba-0.
INFO 03-02 01:35:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:35:27 [logger.py:42] Received request cmpl-5d0488c4b5814ad8b46241bba511d048-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:27 [async_llm.py:261] Added request cmpl-5d0488c4b5814ad8b46241bba511d048-0.
INFO 03-02 01:35:28 [logger.py:42] Received request cmpl-8d4d1b5a8322469dabd4ccfab69790e9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:28 [async_llm.py:261] Added request cmpl-8d4d1b5a8322469dabd4ccfab69790e9-0.
INFO 03-02 01:35:29 [logger.py:42] Received request cmpl-0ed1fec9ec064067b9db394d5b9ce5a2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:29 [async_llm.py:261] Added request cmpl-0ed1fec9ec064067b9db394d5b9ce5a2-0.
INFO 03-02 01:35:30 [logger.py:42] Received request cmpl-8a94882bff0942b6a1086449eabdf28f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:30 [async_llm.py:261] Added request cmpl-8a94882bff0942b6a1086449eabdf28f-0.
INFO 03-02 01:35:31 [logger.py:42] Received request cmpl-40885f2fecad4d8ebc23ca5d58097538-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:31 [async_llm.py:261] Added request cmpl-40885f2fecad4d8ebc23ca5d58097538-0.
INFO 03-02 01:35:32 [logger.py:42] Received request cmpl-6724e334701d4c1a9cf848ab97b4e8dd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:32 [async_llm.py:261] Added request cmpl-6724e334701d4c1a9cf848ab97b4e8dd-0.
INFO 03-02 01:35:33 [logger.py:42] Received request cmpl-f5ea011d323c4b7b8c6b91daf4e743a2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:33 [async_llm.py:261] Added request cmpl-f5ea011d323c4b7b8c6b91daf4e743a2-0.
INFO 03-02 01:35:34 [logger.py:42] Received request cmpl-3abaa717c33b4879816c7a04893a6b97-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:34 [async_llm.py:261] Added request cmpl-3abaa717c33b4879816c7a04893a6b97-0.
INFO 03-02 01:35:35 [logger.py:42] Received request cmpl-4821234953304c7187784c0bc07aeeaf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:35 [async_llm.py:261] Added request cmpl-4821234953304c7187784c0bc07aeeaf-0.
INFO 03-02 01:35:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:35:36 [logger.py:42] Received request cmpl-456d5c6b2407414b85c020679e86f9d0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:36 [async_llm.py:261] Added request cmpl-456d5c6b2407414b85c020679e86f9d0-0.
INFO 03-02 01:35:37 [logger.py:42] Received request cmpl-b11e7ee762274c7b8364cde68d5574f3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:37 [async_llm.py:261] Added request cmpl-b11e7ee762274c7b8364cde68d5574f3-0.
INFO 03-02 01:35:39 [logger.py:42] Received request cmpl-9be7348e4cd148b096726210e4424b01-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:39 [async_llm.py:261] Added request cmpl-9be7348e4cd148b096726210e4424b01-0.
INFO 03-02 01:35:40 [logger.py:42] Received request cmpl-9bd40a1b76bc48f49cc1684b7f8aae98-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:40 [async_llm.py:261] Added request cmpl-9bd40a1b76bc48f49cc1684b7f8aae98-0.
INFO 03-02 01:35:41 [logger.py:42] Received request cmpl-947765de465944689539d46785d6aabf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:41 [async_llm.py:261] Added request cmpl-947765de465944689539d46785d6aabf-0.
INFO 03-02 01:35:42 [logger.py:42] Received request cmpl-a707deb8131643beb71d238d574701f2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:42 [async_llm.py:261] Added request cmpl-a707deb8131643beb71d238d574701f2-0.
INFO 03-02 01:35:43 [logger.py:42] Received request cmpl-5a38ba930ef04ad9a1cf10a0f1e6df50-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:43 [async_llm.py:261] Added request cmpl-5a38ba930ef04ad9a1cf10a0f1e6df50-0.
INFO 03-02 01:35:44 [logger.py:42] Received request cmpl-f5a1f4d3088b4ac3aad037848ad91ca6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:44 [async_llm.py:261] Added request cmpl-f5a1f4d3088b4ac3aad037848ad91ca6-0.
INFO 03-02 01:35:45 [logger.py:42] Received request cmpl-4d0767e59977445980ea17e03f501f9f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:45 [async_llm.py:261] Added request cmpl-4d0767e59977445980ea17e03f501f9f-0.
INFO 03-02 01:35:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:35:46 [logger.py:42] Received request cmpl-b030dd7cccc54c6e95cf3decf859d97a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:46 [async_llm.py:261] Added request cmpl-b030dd7cccc54c6e95cf3decf859d97a-0.
INFO 03-02 01:35:47 [logger.py:42] Received request cmpl-dc1c2dc4f83b4438a26bce60823afd51-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:47 [async_llm.py:261] Added request cmpl-dc1c2dc4f83b4438a26bce60823afd51-0.
INFO 03-02 01:35:48 [logger.py:42] Received request cmpl-9cfe7a49cae74dc4be4ed98bcdad6c18-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:48 [async_llm.py:261] Added request cmpl-9cfe7a49cae74dc4be4ed98bcdad6c18-0.
INFO 03-02 01:35:49 [logger.py:42] Received request cmpl-f85d55a82be14eeb86650e2a3a7302f9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:49 [async_llm.py:261] Added request cmpl-f85d55a82be14eeb86650e2a3a7302f9-0.
INFO 03-02 01:35:51 [logger.py:42] Received request cmpl-dc3a7da35b884ee7a1db8c44be5d34e4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:51 [async_llm.py:261] Added request cmpl-dc3a7da35b884ee7a1db8c44be5d34e4-0.
INFO 03-02 01:35:52 [logger.py:42] Received request cmpl-4e840e813a3e44e482f12479b267f1ed-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:52 [async_llm.py:261] Added request cmpl-4e840e813a3e44e482f12479b267f1ed-0.
INFO 03-02 01:35:53 [logger.py:42] Received request cmpl-2593c78c1dc44e629fbe2319f3af5a25-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:53 [async_llm.py:261] Added request cmpl-2593c78c1dc44e629fbe2319f3af5a25-0.
INFO 03-02 01:35:54 [logger.py:42] Received request cmpl-fd9324594c3a4d02833882623be6937a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:54 [async_llm.py:261] Added request cmpl-fd9324594c3a4d02833882623be6937a-0.
INFO 03-02 01:35:55 [logger.py:42] Received request cmpl-23d632f132d341e5b610081a30d7c237-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:55 [async_llm.py:261] Added request cmpl-23d632f132d341e5b610081a30d7c237-0.
INFO 03-02 01:35:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:35:56 [logger.py:42] Received request cmpl-34c4ce622ae349f09da483d878208384-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:56 [async_llm.py:261] Added request cmpl-34c4ce622ae349f09da483d878208384-0.
INFO 03-02 01:35:57 [logger.py:42] Received request cmpl-4e6ca516275c4318aae904496c436a17-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:57 [async_llm.py:261] Added request cmpl-4e6ca516275c4318aae904496c436a17-0.
INFO 03-02 01:35:58 [logger.py:42] Received request cmpl-70ed29697a4b45f29f2527c5b98d75f6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:58 [async_llm.py:261] Added request cmpl-70ed29697a4b45f29f2527c5b98d75f6-0.
INFO 03-02 01:35:59 [logger.py:42] Received request cmpl-fd5720962ed7443a9e9ed78c40165200-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:59 [async_llm.py:261] Added request cmpl-fd5720962ed7443a9e9ed78c40165200-0.
INFO 03-02 01:36:00 [logger.py:42] Received request cmpl-55e2fccaca1244cb8cb60e452cd94952-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:36:00 [async_llm.py:261] Added request cmpl-55e2fccaca1244cb8cb60e452cd94952-0.
INFO 03-02 01:36:01 [logger.py:42] Received request cmpl-0b40269b217442138ad5129a068f559c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:36:01 [async_llm.py:261] Added request cmpl-0b40269b217442138ad5129a068f559c-0.
INFO 03-02 01:36:02 [logger.py:42] Received request cmpl-2d501561d6694a90a6885c43de085b6e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:36:02 [async_llm.py:261] Added request cmpl-2d501561d6694a90a6885c43de085b6e-0.
INFO 03-02 01:36:04 [logger.py:42] Received request cmpl-2a883b263b3a486d81a5a19cd5559e2d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:36:04 [async_llm.py:261] Added request cmpl-2a883b263b3a486d81a5a19cd5559e2d-0.
INFO 03-02 01:36:05 [logger.py:42] Received request cmpl-52c94b99eebf4b8fae6bba3dd5673c61-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:36:05 [async_llm.py:261] Added request cmpl-52c94b99eebf4b8fae6bba3dd5673c61-0.
INFO 03-02 01:36:06 [logger.py:42] Received request cmpl-369d32ea73b3428c82154feb06bd061a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:36:06 [async_llm.py:261] Added request cmpl-369d32ea73b3428c82154feb06bd061a-0.
INFO 03-02 01:36:06 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:36:07 [logger.py:42] Received request cmpl-32d07651433949d2afed5026a6ce349d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:36:07 [async_llm.py:261] Added request cmpl-32d07651433949d2afed5026a6ce349d-0.
INFO 03-02 01:36:08 [logger.py:42] Received request cmpl-a9cb336c45314d898dd32c51e91e107e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:36:08 [async_llm.py:261] Added request cmpl-a9cb336c45314d898dd32c51e91e107e-0.
INFO 03-02 01:36:09 [logger.py:42] Received request cmpl-e3c9489b29e64952a972e7f91fc0f283-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:36:09 [async_llm.py:261] Added request cmpl-e3c9489b29e64952a972e7f91fc0f283-0.
INFO 03-02 01:36:10 [logger.py:42] Received request cmpl-58bfae1dcf0b49eeab6652675b9eedbc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:36:10 [async_llm.py:261] Added request cmpl-58bfae1dcf0b49eeab6652675b9eedbc-0.
INFO 03-02 01:36:11 [logger.py:42] Received request cmpl-2a0672c623924c9b80a9a5938a89f8f1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:36:11 [async_llm.py:261] Added request cmpl-2a0672c623924c9b80a9a5938a89f8f1-0.
INFO 03-02 01:36:12 [logger.py:42] Received request cmpl-1b4d1d1ea2ce4d908ff02a55cdd8da3f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:36:12 [async_llm.py:261] Added request cmpl-1b4d1d1ea2ce4d908ff02a55cdd8da3f-0.
INFO 03-02 01:36:13 [logger.py:42] Received request cmpl-83f8a4b09d564dbbbb058be9733f95a5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:36:13 [async_llm.py:261] Added request cmpl-83f8a4b09d564dbbbb058be9733f95a5-0.
INFO 03-02 01:36:14 [logger.py:42] Received request cmpl-66c9cf6bd55240bd89336d2d01d48b08-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:36:14 [async_llm.py:261] Added request cmpl-66c9cf6bd55240bd89336d2d01d48b08-0.
INFO 03-02 01:36:15 [logger.py:42] Received request cmpl-c0c169eb85a44e9fbd35b83b6dd5483b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:36:15 [async_llm.py:261] Added request cmpl-c0c169eb85a44e9fbd35b83b6dd5483b-0.
INFO 03-02 01:36:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[repeated entries: the same 'write a quick sort algorithm.' completion request (max_tokens=5) arrives roughly once per second through 01:37:00, each answered "POST /v1/completions HTTP/1.1" 200 OK; periodic Engine 000 stats hold steady at ~6.3-7.0 tokens/s avg prompt throughput, ~4.5-5.0 tokens/s avg generation throughput, 0 running/waiting requests, 0.7% GPU KV cache usage, 0.0% prefix cache hit rate]
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:37:00 [async_llm.py:261] Added request cmpl-6183219de6aa4a7cb03d3b6be0b1b93f-0.
INFO 03-02 01:37:01 [logger.py:42] Received request cmpl-94da6cb1644444858c2957b702543e0e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:37:01 [async_llm.py:261] Added request cmpl-94da6cb1644444858c2957b702543e0e-0.
INFO 03-02 01:37:02 [logger.py:42] Received request cmpl-35dcc0c1ebe0472dbefbdabd39722bc8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:37:02 [async_llm.py:261] Added request cmpl-35dcc0c1ebe0472dbefbdabd39722bc8-0.
INFO 03-02 01:37:03 [logger.py:42] Received request cmpl-e37784b4ffcd441796fc0ba70b356af1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:37:03 [async_llm.py:261] Added request cmpl-e37784b4ffcd441796fc0ba70b356af1-0.
INFO 03-02 01:37:04 [logger.py:42] Received request cmpl-c0a926fead9b4840a97ff26a293fb1ee-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:37:04 [async_llm.py:261] Added request cmpl-c0a926fead9b4840a97ff26a293fb1ee-0.
INFO 03-02 01:37:05 [logger.py:42] Received request cmpl-e29a8f4f91e34717b4d75511409ee6bf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:37:05 [async_llm.py:261] Added request cmpl-e29a8f4f91e34717b4d75511409ee6bf-0.
INFO 03-02 01:37:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:37:06 [logger.py:42] Received request cmpl-bfc43fecf19a47f0b9ef509a0b2e89d7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:37:06 [async_llm.py:261] Added request cmpl-bfc43fecf19a47f0b9ef509a0b2e89d7-0.
INFO 03-02 01:37:08 [logger.py:42] Received request cmpl-7bdf608186dd4ac6b8f0ba910ebc7ddd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:37:08 [async_llm.py:261] Added request cmpl-7bdf608186dd4ac6b8f0ba910ebc7ddd-0.
INFO 03-02 01:37:09 [logger.py:42] Received request cmpl-5ac2b7488bac44cc8d704361714a7c88-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:37:09 [async_llm.py:261] Added request cmpl-5ac2b7488bac44cc8d704361714a7c88-0.
INFO 03-02 01:37:10 [logger.py:42] Received request cmpl-f47e2212370a472ea2c6843b12b35acf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:37:10 [async_llm.py:261] Added request cmpl-f47e2212370a472ea2c6843b12b35acf-0.
INFO 03-02 01:37:11 [logger.py:42] Received request cmpl-882ab69ed4274ca8a811aa1d413e9bf9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:37:11 [async_llm.py:261] Added request cmpl-882ab69ed4274ca8a811aa1d413e9bf9-0.
INFO 03-02 01:37:12 [logger.py:42] Received request cmpl-2e5abf0553ae4ef19a88f0f50a69f785-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:37:12 [async_llm.py:261] Added request cmpl-2e5abf0553ae4ef19a88f0f50a69f785-0.
INFO 03-02 01:37:13 [logger.py:42] Received request cmpl-51177074c6a34859a6a80d7185c24b49-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:37:13 [async_llm.py:261] Added request cmpl-51177074c6a34859a6a80d7185c24b49-0.
INFO 03-02 01:37:14 [logger.py:42] Received request cmpl-5d3e4063a39a4a22bd6d006ecd6b4e26-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:37:14 [async_llm.py:261] Added request cmpl-5d3e4063a39a4a22bd6d006ecd6b4e26-0.
INFO 03-02 01:37:15 [logger.py:42] Received request cmpl-e95d1810b8c9433a81ba24227ccb2133-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:37:15 [async_llm.py:261] Added request cmpl-e95d1810b8c9433a81ba24227ccb2133-0.
INFO 03-02 01:37:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:37:16 [logger.py:42] Received request cmpl-70f1a13b9b4c4962b5aad44264c476be-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:37:16 [async_llm.py:261] Added request cmpl-70f1a13b9b4c4962b5aad44264c476be-0.
INFO 03-02 01:37:17 [logger.py:42] Received request cmpl-7b7d20d915fd41719972031f6a6da474-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:37:17 [async_llm.py:261] Added request cmpl-7b7d20d915fd41719972031f6a6da474-0.
INFO 03-02 01:37:18 [logger.py:42] Received request cmpl-aaca25a089064bc6a70e80d9880aacb5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:37:18 [async_llm.py:261] Added request cmpl-aaca25a089064bc6a70e80d9880aacb5-0.
INFO 03-02 01:37:19 [logger.py:42] Received request cmpl-23a7121ca441454884ec6eae603f722a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:37:19 [async_llm.py:261] Added request cmpl-23a7121ca441454884ec6eae603f722a-0.
INFO 03-02 01:37:21 [logger.py:42] Received request cmpl-8fee0fc1f5974eda882e4ffac4f7401f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:37:21 [async_llm.py:261] Added request cmpl-8fee0fc1f5974eda882e4ffac4f7401f-0.
INFO 03-02 01:37:22 [logger.py:42] Received request cmpl-0e083a597a154bf791fd622102c2f6b3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:37:22 [async_llm.py:261] Added request cmpl-0e083a597a154bf791fd622102c2f6b3-0.
INFO 03-02 01:37:23 [logger.py:42] Received request cmpl-a92dee37313f4f3c983814f22d23e9a6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:37:23 [async_llm.py:261] Added request cmpl-a92dee37313f4f3c983814f22d23e9a6-0.
INFO 03-02 01:37:24 [logger.py:42] Received request cmpl-0728032489ff4cacbf9978b5f3219158-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:37:24 [async_llm.py:261] Added request cmpl-0728032489ff4cacbf9978b5f3219158-0.
INFO 03-02 01:37:25 [logger.py:42] Received request cmpl-496a13164dae4e70b62c5aac03bb7136-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:37:25 [async_llm.py:261] Added request cmpl-496a13164dae4e70b62c5aac03bb7136-0.
INFO 03-02 01:37:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:37:26 [logger.py:42] Received request cmpl-fa1218474ee14e15998a926fe313fc88-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:37:26 [async_llm.py:261] Added request cmpl-fa1218474ee14e15998a926fe313fc88-0.
INFO 03-02 01:37:27 [logger.py:42] Received request cmpl-d023c9afd5a84643b85420ecefe6ae82-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:37:27 [async_llm.py:261] Added request cmpl-d023c9afd5a84643b85420ecefe6ae82-0.
INFO 03-02 01:37:28 [logger.py:42] Received request cmpl-2711ea45386b42128b9d9bc04973e83e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:37:28 [async_llm.py:261] Added request cmpl-2711ea45386b42128b9d9bc04973e83e-0.
INFO 03-02 01:37:29 [logger.py:42] Received request cmpl-6d9fa03c6cd04fcbbe329c78cc04446b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:37:29 [async_llm.py:261] Added request cmpl-6d9fa03c6cd04fcbbe329c78cc04446b-0.
INFO 03-02 01:37:30 [logger.py:42] Received request cmpl-44d1569fde854c55a2cde1e8cbdb7007-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:37:30 [async_llm.py:261] Added request cmpl-44d1569fde854c55a2cde1e8cbdb7007-0.
INFO 03-02 01:37:31 [logger.py:42] Received request cmpl-916148e25be94cecb023533d90540591-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:37:31 [async_llm.py:261] Added request cmpl-916148e25be94cecb023533d90540591-0.
INFO 03-02 01:37:32 [logger.py:42] Received request cmpl-4859f2bdf0804cb28f975bc91482d45f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:37:32 [async_llm.py:261] Added request cmpl-4859f2bdf0804cb28f975bc91482d45f-0.
INFO 03-02 01:37:34 [logger.py:42] Received request cmpl-469fdb88ef7843098e454542cb0154db-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:37:34 [async_llm.py:261] Added request cmpl-469fdb88ef7843098e454542cb0154db-0.
INFO 03-02 01:37:35 [logger.py:42] Received request cmpl-1fa4ca9bb71a4e718b0cbb98cc4b3d48-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:37:35 [async_llm.py:261] Added request cmpl-1fa4ca9bb71a4e718b0cbb98cc4b3d48-0.
INFO 03-02 01:37:36 [logger.py:42] Received request cmpl-94f3f04d56c64f64959c4b2ecd2bf9e7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:37:36 [async_llm.py:261] Added request cmpl-94f3f04d56c64f64959c4b2ecd2bf9e7-0.
INFO 03-02 01:37:36 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
[… further request/response triplets elided: the same prompt arrives roughly once per second through 01:38:17, each under a fresh request ID (cmpl-…-0) with identical SamplingParams and a "POST /v1/completions HTTP/1.1" 200 OK response; the interleaved Engine 000 summaries continue to report 6.3–7.0 tokens/s avg prompt throughput, 4.5–5.0 tokens/s avg generation throughput, 0 running / 0 waiting requests, 0.7% GPU KV cache usage, and a 0.0% prefix cache hit rate …]
INFO 03-02 01:38:17 [async_llm.py:261] Added request cmpl-3564fd9059bc4d06829744a5f02b66db-0.
INFO 03-02 01:38:18 [logger.py:42] Received request cmpl-4d369738cba04c98968488c55f44cd95-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:38:18 [async_llm.py:261] Added request cmpl-4d369738cba04c98968488c55f44cd95-0.
INFO 03-02 01:38:19 [logger.py:42] Received request cmpl-df38b4c1744047e0befd21b0308a36b9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:38:19 [async_llm.py:261] Added request cmpl-df38b4c1744047e0befd21b0308a36b9-0.
INFO 03-02 01:38:20 [logger.py:42] Received request cmpl-471a1b35576d43558d3464df1c616338-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:38:20 [async_llm.py:261] Added request cmpl-471a1b35576d43558d3464df1c616338-0.
INFO 03-02 01:38:21 [logger.py:42] Received request cmpl-2322b90f56324fc692bc68ec5a73f86c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:38:21 [async_llm.py:261] Added request cmpl-2322b90f56324fc692bc68ec5a73f86c-0.
INFO 03-02 01:38:22 [logger.py:42] Received request cmpl-17586e03bd5a4aaeb14723877a6c95d8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:38:22 [async_llm.py:261] Added request cmpl-17586e03bd5a4aaeb14723877a6c95d8-0.
INFO 03-02 01:38:23 [logger.py:42] Received request cmpl-92f065fea316442ca1987b5761e30c61-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:38:23 [async_llm.py:261] Added request cmpl-92f065fea316442ca1987b5761e30c61-0.
INFO 03-02 01:38:24 [logger.py:42] Received request cmpl-48fda4de1119425380ca8ad6987e313c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:38:25 [async_llm.py:261] Added request cmpl-48fda4de1119425380ca8ad6987e313c-0.
INFO 03-02 01:38:26 [logger.py:42] Received request cmpl-6e2b4433ea0e49e7b10727479786222e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:38:26 [async_llm.py:261] Added request cmpl-6e2b4433ea0e49e7b10727479786222e-0.
INFO 03-02 01:38:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:38:27 [logger.py:42] Received request cmpl-ec1e512bcaf84bc6a581c397624107c2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:38:27 [async_llm.py:261] Added request cmpl-ec1e512bcaf84bc6a581c397624107c2-0.
INFO 03-02 01:38:28 [logger.py:42] Received request cmpl-1e6db516126547e1a3a303d720c9efdc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:38:28 [async_llm.py:261] Added request cmpl-1e6db516126547e1a3a303d720c9efdc-0.
INFO 03-02 01:38:29 [logger.py:42] Received request cmpl-245fcde533ee4caeae72876c036727c4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:38:29 [async_llm.py:261] Added request cmpl-245fcde533ee4caeae72876c036727c4-0.
INFO 03-02 01:38:30 [logger.py:42] Received request cmpl-b1c53d35beac4a9d96e602b76be9b2cc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:38:30 [async_llm.py:261] Added request cmpl-b1c53d35beac4a9d96e602b76be9b2cc-0.
INFO 03-02 01:38:31 [logger.py:42] Received request cmpl-2384f776693143d18eda420df860b6a0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:38:31 [async_llm.py:261] Added request cmpl-2384f776693143d18eda420df860b6a0-0.
INFO 03-02 01:38:32 [logger.py:42] Received request cmpl-a08ee1840b55411e9002a2aade7d0d46-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:38:32 [async_llm.py:261] Added request cmpl-a08ee1840b55411e9002a2aade7d0d46-0.
INFO 03-02 01:38:33 [logger.py:42] Received request cmpl-8379b59bf3634a3988ca35a697537533-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:38:33 [async_llm.py:261] Added request cmpl-8379b59bf3634a3988ca35a697537533-0.
INFO 03-02 01:38:34 [logger.py:42] Received request cmpl-419b23a80863478089a57cb133b931c4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:38:34 [async_llm.py:261] Added request cmpl-419b23a80863478089a57cb133b931c4-0.
INFO 03-02 01:38:35 [logger.py:42] Received request cmpl-f1c7d7f1292f48c4bbeb200998505c72-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:38:35 [async_llm.py:261] Added request cmpl-f1c7d7f1292f48c4bbeb200998505c72-0.
INFO 03-02 01:38:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:38:36 [logger.py:42] Received request cmpl-71811a02c40d45949b38131a9bd2ab6f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:38:36 [async_llm.py:261] Added request cmpl-71811a02c40d45949b38131a9bd2ab6f-0.
INFO 03-02 01:38:38 [logger.py:42] Received request cmpl-20b652657cda44ec83f5a0f6b65586ea-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:38:38 [async_llm.py:261] Added request cmpl-20b652657cda44ec83f5a0f6b65586ea-0.
INFO 03-02 01:38:39 [logger.py:42] Received request cmpl-77516e878068432eac1351cbfe7831e4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:38:39 [async_llm.py:261] Added request cmpl-77516e878068432eac1351cbfe7831e4-0.
INFO 03-02 01:38:40 [logger.py:42] Received request cmpl-7497039bc43340eb8716ccf6e09398ab-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:38:40 [async_llm.py:261] Added request cmpl-7497039bc43340eb8716ccf6e09398ab-0.
INFO 03-02 01:38:41 [logger.py:42] Received request cmpl-a8411f1317804b76990c87de6033f959-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:38:41 [async_llm.py:261] Added request cmpl-a8411f1317804b76990c87de6033f959-0.
INFO 03-02 01:38:42 [logger.py:42] Received request cmpl-f9b5a6b732ba4b2a8588411a30490976-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:38:42 [async_llm.py:261] Added request cmpl-f9b5a6b732ba4b2a8588411a30490976-0.
INFO 03-02 01:38:43 [logger.py:42] Received request cmpl-9fc05bc879dd4730839488e40e082af9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:38:43 [async_llm.py:261] Added request cmpl-9fc05bc879dd4730839488e40e082af9-0.
INFO 03-02 01:38:44 [logger.py:42] Received request cmpl-a82cdf86825e480bbd6e8d50c830db82-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:38:44 [async_llm.py:261] Added request cmpl-a82cdf86825e480bbd6e8d50c830db82-0.
INFO 03-02 01:38:45 [logger.py:42] Received request cmpl-14c79ca799f142e3b8bb8d4a6e785668-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:38:45 [async_llm.py:261] Added request cmpl-14c79ca799f142e3b8bb8d4a6e785668-0.
INFO 03-02 01:38:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:38:46 [logger.py:42] Received request cmpl-cc64590ac8a44092bb7a211178aae580-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:38:46 [async_llm.py:261] Added request cmpl-cc64590ac8a44092bb7a211178aae580-0.
INFO 03-02 01:38:47 [logger.py:42] Received request cmpl-24f5b33fee964a2789f6b04260417180-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:38:47 [async_llm.py:261] Added request cmpl-24f5b33fee964a2789f6b04260417180-0.
INFO 03-02 01:38:48 [logger.py:42] Received request cmpl-1610851d87c94afab39846f2c161dc98-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:38:48 [async_llm.py:261] Added request cmpl-1610851d87c94afab39846f2c161dc98-0.
INFO 03-02 01:38:49 [logger.py:42] Received request cmpl-1070003fa65e4a57aafa8dee3f028dff-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:38:49 [async_llm.py:261] Added request cmpl-1070003fa65e4a57aafa8dee3f028dff-0.
INFO 03-02 01:38:51 [logger.py:42] Received request cmpl-fd821855f18945e6a0338d571275f15f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:38:51 [async_llm.py:261] Added request cmpl-fd821855f18945e6a0338d571275f15f-0.
INFO 03-02 01:38:52 [logger.py:42] Received request cmpl-3df237952baf4e39b1f94267ef7c7033-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:38:52 [async_llm.py:261] Added request cmpl-3df237952baf4e39b1f94267ef7c7033-0.
[... 40 further request cycles elided (01:38:53–01:39:35): each repeats the same three-line pattern — "Received request cmpl-…-0" with the identical prompt 'write a quick sort algorithm.' and identical SamplingParams (temperature=0.0, max_tokens=5), "POST /v1/completions HTTP/1.1" 200 OK from 1.2.3.5:1235, then "Added request" — arriving roughly once per second. Interleaved Engine 000 summaries (loggers.py:116) report avg prompt throughput 6.3–7.0 tokens/s, avg generation throughput 4.5–5.0 tokens/s, 0 running / 0 waiting requests, GPU KV cache usage 0.7%, prefix cache hit rate 0.0% throughout. ...]
INFO 03-02 01:39:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:39:36 [logger.py:42] Received request cmpl-bd6ccd7bd01b4c26969270eeb9c45c1c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:39:36 [async_llm.py:261] Added request cmpl-bd6ccd7bd01b4c26969270eeb9c45c1c-0.
INFO 03-02 01:39:37 [logger.py:42] Received request cmpl-dc69f71a8a5a47cd9c66eb6ed3cac96c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:39:37 [async_llm.py:261] Added request cmpl-dc69f71a8a5a47cd9c66eb6ed3cac96c-0.
INFO 03-02 01:39:38 [logger.py:42] Received request cmpl-6cfeef2d7cfe4734be9604cd03172730-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:39:38 [async_llm.py:261] Added request cmpl-6cfeef2d7cfe4734be9604cd03172730-0.
INFO 03-02 01:39:39 [logger.py:42] Received request cmpl-0e5f652379394b719bc251714f87b702-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:39:39 [async_llm.py:261] Added request cmpl-0e5f652379394b719bc251714f87b702-0.
INFO 03-02 01:39:40 [logger.py:42] Received request cmpl-a17a119e9c7c488f9da3ef620aef39da-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:39:40 [async_llm.py:261] Added request cmpl-a17a119e9c7c488f9da3ef620aef39da-0.
INFO 03-02 01:39:41 [logger.py:42] Received request cmpl-0648e18e50274eb998fc801ce44caf65-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:39:41 [async_llm.py:261] Added request cmpl-0648e18e50274eb998fc801ce44caf65-0.
INFO 03-02 01:39:43 [logger.py:42] Received request cmpl-54e6c3ac20bd45bd9696f84c2d031167-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:39:43 [async_llm.py:261] Added request cmpl-54e6c3ac20bd45bd9696f84c2d031167-0.
INFO 03-02 01:39:44 [logger.py:42] Received request cmpl-44fe348705744017b8a903b664b5b244-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:39:44 [async_llm.py:261] Added request cmpl-44fe348705744017b8a903b664b5b244-0.
INFO 03-02 01:39:45 [logger.py:42] Received request cmpl-26cd00442aa34b3483dc586a286e7426-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:39:45 [async_llm.py:261] Added request cmpl-26cd00442aa34b3483dc586a286e7426-0.
INFO 03-02 01:39:46 [logger.py:42] Received request cmpl-731639867fe6480199d30897284d2709-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:39:46 [async_llm.py:261] Added request cmpl-731639867fe6480199d30897284d2709-0.
INFO 03-02 01:39:46 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.9 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 1.3%, Prefix cache hit rate: 0.0%
INFO 03-02 01:39:47 [logger.py:42] Received request cmpl-4f68bb052dc7453c918abd0ee5aeeb96-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:39:47 [async_llm.py:261] Added request cmpl-4f68bb052dc7453c918abd0ee5aeeb96-0.
INFO 03-02 01:39:48 [logger.py:42] Received request cmpl-8a58619a99f24b3f8be42f82dc617a9f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:39:48 [async_llm.py:261] Added request cmpl-8a58619a99f24b3f8be42f82dc617a9f-0.
INFO 03-02 01:39:49 [logger.py:42] Received request cmpl-a2e95f9a7af143a0a477d96238e3c38d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:39:49 [async_llm.py:261] Added request cmpl-a2e95f9a7af143a0a477d96238e3c38d-0.
INFO 03-02 01:39:50 [logger.py:42] Received request cmpl-837a4619689d4cea8ef1cf5a7817ddb1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:39:50 [async_llm.py:261] Added request cmpl-837a4619689d4cea8ef1cf5a7817ddb1-0.
INFO 03-02 01:39:51 [logger.py:42] Received request cmpl-8c9b64c882ff4aeeb08ceabc54fac8d3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:39:51 [async_llm.py:261] Added request cmpl-8c9b64c882ff4aeeb08ceabc54fac8d3-0.
INFO 03-02 01:39:52 [logger.py:42] Received request cmpl-1b61bcd1fcfc4d7b8e2ad5716fb1b428-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:39:52 [async_llm.py:261] Added request cmpl-1b61bcd1fcfc4d7b8e2ad5716fb1b428-0.
INFO 03-02 01:39:53 [logger.py:42] Received request cmpl-63eb5adca7084b379bd4a628d3b7b1bd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:39:53 [async_llm.py:261] Added request cmpl-63eb5adca7084b379bd4a628d3b7b1bd-0.
INFO 03-02 01:39:55 [logger.py:42] Received request cmpl-a81482d2747948c3887fb8690d9c7809-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:39:55 [async_llm.py:261] Added request cmpl-a81482d2747948c3887fb8690d9c7809-0.
INFO 03-02 01:39:56 [logger.py:42] Received request cmpl-69bc90cb968844e684425be3f7a7df15-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:39:56 [async_llm.py:261] Added request cmpl-69bc90cb968844e684425be3f7a7df15-0.
INFO 03-02 01:39:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.6 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:39:57 [logger.py:42] Received request cmpl-172ab9c4e5a0435ba34f94fba04ba6c8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:39:57 [async_llm.py:261] Added request cmpl-172ab9c4e5a0435ba34f94fba04ba6c8-0.
INFO 03-02 01:39:58 [logger.py:42] Received request cmpl-d5248cd3b5744e9892afacb730af6ef8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:39:58 [async_llm.py:261] Added request cmpl-d5248cd3b5744e9892afacb730af6ef8-0.
INFO 03-02 01:39:59 [logger.py:42] Received request cmpl-be47c5f499ee41fea9652e99caea0b51-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:39:59 [async_llm.py:261] Added request cmpl-be47c5f499ee41fea9652e99caea0b51-0.
INFO 03-02 01:40:00 [logger.py:42] Received request cmpl-10eb2ee8adc54250b90ac17f662bea23-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:00 [async_llm.py:261] Added request cmpl-10eb2ee8adc54250b90ac17f662bea23-0.
INFO 03-02 01:40:01 [logger.py:42] Received request cmpl-1e3da97ba1064953858014b89f502d11-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:01 [async_llm.py:261] Added request cmpl-1e3da97ba1064953858014b89f502d11-0.
INFO 03-02 01:40:02 [logger.py:42] Received request cmpl-ff159330f43c492992add8c24e93f77c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:02 [async_llm.py:261] Added request cmpl-ff159330f43c492992add8c24e93f77c-0.
INFO 03-02 01:40:03 [logger.py:42] Received request cmpl-44f933a718fb465e84f3b9e7429c8841-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:03 [async_llm.py:261] Added request cmpl-44f933a718fb465e84f3b9e7429c8841-0.
INFO 03-02 01:40:04 [logger.py:42] Received request cmpl-5e5640f132874437af9cf43a03b17e42-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:04 [async_llm.py:261] Added request cmpl-5e5640f132874437af9cf43a03b17e42-0.
INFO 03-02 01:40:05 [logger.py:42] Received request cmpl-8b121ac29a564698a84c0035820d5e74-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:05 [async_llm.py:261] Added request cmpl-8b121ac29a564698a84c0035820d5e74-0.
INFO 03-02 01:40:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:40:06 [logger.py:42] Received request cmpl-da335337b6be43b2bd0f45b7192eaeec-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:06 [async_llm.py:261] Added request cmpl-da335337b6be43b2bd0f45b7192eaeec-0.
INFO 03-02 01:40:08 [logger.py:42] Received request cmpl-efc1098eb0e14b09b0a3c72af15817b9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:08 [async_llm.py:261] Added request cmpl-efc1098eb0e14b09b0a3c72af15817b9-0.
INFO 03-02 01:40:09 [logger.py:42] Received request cmpl-44545fe4c7e74d57bea6d7394c3621c9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:09 [async_llm.py:261] Added request cmpl-44545fe4c7e74d57bea6d7394c3621c9-0.
INFO 03-02 01:40:10 [logger.py:42] Received request cmpl-c3e7b297d7e845ac9b855c8b90a223f6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:10 [async_llm.py:261] Added request cmpl-c3e7b297d7e845ac9b855c8b90a223f6-0.
INFO 03-02 01:40:11 [logger.py:42] Received request cmpl-0d7401a1b3d644c188f440f855c78dab-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:11 [async_llm.py:261] Added request cmpl-0d7401a1b3d644c188f440f855c78dab-0.
INFO 03-02 01:40:12 [logger.py:42] Received request cmpl-85e8616107944f59bc426385a365a6ae-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:12 [async_llm.py:261] Added request cmpl-85e8616107944f59bc426385a365a6ae-0.
INFO 03-02 01:40:13 [logger.py:42] Received request cmpl-6e8f0d45bc7f4b4aa332f6767a2eb59f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:13 [async_llm.py:261] Added request cmpl-6e8f0d45bc7f4b4aa332f6767a2eb59f-0.
INFO 03-02 01:40:14 [logger.py:42] Received request cmpl-04170b7ea9c54958ac2259e02fa29eb5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:14 [async_llm.py:261] Added request cmpl-04170b7ea9c54958ac2259e02fa29eb5-0.
INFO 03-02 01:40:15 [logger.py:42] Received request cmpl-b75a87a3fdf34b9bb0ca8fe4c56d4454-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:15 [async_llm.py:261] Added request cmpl-b75a87a3fdf34b9bb0ca8fe4c56d4454-0.
INFO 03-02 01:40:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:40:16 [logger.py:42] Received request cmpl-5e2ff79a2c7048d79b3f716b10e73f57-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:16 [async_llm.py:261] Added request cmpl-5e2ff79a2c7048d79b3f716b10e73f57-0.
INFO 03-02 01:40:17 [logger.py:42] Received request cmpl-cf1c2e3fde5b42a282a95096af5625d7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:17 [async_llm.py:261] Added request cmpl-cf1c2e3fde5b42a282a95096af5625d7-0.
INFO 03-02 01:40:18 [logger.py:42] Received request cmpl-6fa46247df2e4706a817741231d1d95a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:18 [async_llm.py:261] Added request cmpl-6fa46247df2e4706a817741231d1d95a-0.
INFO 03-02 01:40:19 [logger.py:42] Received request cmpl-64b072a6a1544321bab118f61d450fdb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:19 [async_llm.py:261] Added request cmpl-64b072a6a1544321bab118f61d450fdb-0.
INFO 03-02 01:40:21 [logger.py:42] Received request cmpl-d7848939589043dfa20cd86f6e50e5fb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:21 [async_llm.py:261] Added request cmpl-d7848939589043dfa20cd86f6e50e5fb-0.
INFO 03-02 01:40:22 [logger.py:42] Received request cmpl-84ebc86d2eea446ba76462ba471e06f1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:22 [async_llm.py:261] Added request cmpl-84ebc86d2eea446ba76462ba471e06f1-0.
INFO 03-02 01:40:23 [logger.py:42] Received request cmpl-a420b5add5e545bd86fe2849dc4e0b11-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:23 [async_llm.py:261] Added request cmpl-a420b5add5e545bd86fe2849dc4e0b11-0.
INFO 03-02 01:40:24 [logger.py:42] Received request cmpl-462ca365958f41d69355f1a0832520ea-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:24 [async_llm.py:261] Added request cmpl-462ca365958f41d69355f1a0832520ea-0.
INFO 03-02 01:40:25 [logger.py:42] Received request cmpl-924f5d6b5c914b02b4ef7f434a6d419d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:25 [async_llm.py:261] Added request cmpl-924f5d6b5c914b02b4ef7f434a6d419d-0.
INFO 03-02 01:40:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:40:26 [logger.py:42] Received request cmpl-c2a5cd5e30c9477681cced17e649cd92-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:26 [async_llm.py:261] Added request cmpl-c2a5cd5e30c9477681cced17e649cd92-0.
INFO 03-02 01:40:27 [logger.py:42] Received request cmpl-33e8111fb2b44ccfa6acd1dc2bacba29-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:27 [async_llm.py:261] Added request cmpl-33e8111fb2b44ccfa6acd1dc2bacba29-0.
INFO 03-02 01:40:28 [logger.py:42] Received request cmpl-5a542fbd8c394bf68d8268b0a2959f04-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:28 [async_llm.py:261] Added request cmpl-5a542fbd8c394bf68d8268b0a2959f04-0.
INFO 03-02 01:40:29 [logger.py:42] Received request cmpl-6cec9a539b4e4f6a804a41a4b014767b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:29 [async_llm.py:261] Added request cmpl-6cec9a539b4e4f6a804a41a4b014767b-0.
INFO 03-02 01:40:30 [logger.py:42] Received request cmpl-1a8a11aa1ec74a8f9be1e65c78bef361-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:30 [async_llm.py:261] Added request cmpl-1a8a11aa1ec74a8f9be1e65c78bef361-0.
INFO 03-02 01:40:31 [logger.py:42] Received request cmpl-34dc9a3349604b3789668e99dd040af8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:31 [async_llm.py:261] Added request cmpl-34dc9a3349604b3789668e99dd040af8-0.
INFO 03-02 01:40:32 [logger.py:42] Received request cmpl-7617d54f6672472cac9fe0aaea01401d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:32 [async_llm.py:261] Added request cmpl-7617d54f6672472cac9fe0aaea01401d-0.
INFO 03-02 01:40:34 [logger.py:42] Received request cmpl-4bac846e631b4576bd2ccd274c4dc990-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:34 [async_llm.py:261] Added request cmpl-4bac846e631b4576bd2ccd274c4dc990-0.
INFO 03-02 01:40:35 [logger.py:42] Received request cmpl-f6f251a0485347788d6591da57a5d1c4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:35 [async_llm.py:261] Added request cmpl-f6f251a0485347788d6591da57a5d1c4-0.
INFO 03-02 01:40:36 [logger.py:42] Received request cmpl-516290cb5abb457db64caf256f144290-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:36 [async_llm.py:261] Added request cmpl-516290cb5abb457db64caf256f144290-0.
INFO 03-02 01:40:36 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:40:37 [logger.py:42] Received request cmpl-f54674114047481b9ee17163d2144e63-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:37 [async_llm.py:261] Added request cmpl-f54674114047481b9ee17163d2144e63-0.
INFO 03-02 01:40:38 [logger.py:42] Received request cmpl-3ee565aa26974f8fb52a61e3f8f23384-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:38 [async_llm.py:261] Added request cmpl-3ee565aa26974f8fb52a61e3f8f23384-0.
INFO 03-02 01:40:39 [logger.py:42] Received request cmpl-34e9d7800e884b04be8f25434822d2fb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:39 [async_llm.py:261] Added request cmpl-34e9d7800e884b04be8f25434822d2fb-0.
INFO 03-02 01:40:40 [logger.py:42] Received request cmpl-3b02505657124a039e8e2c8c9f8b3829-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:40 [async_llm.py:261] Added request cmpl-3b02505657124a039e8e2c8c9f8b3829-0.
INFO 03-02 01:40:41 [logger.py:42] Received request cmpl-14ff93f1931746d291b2068650dbc1b0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:41 [async_llm.py:261] Added request cmpl-14ff93f1931746d291b2068650dbc1b0-0.
INFO 03-02 01:40:42 [logger.py:42] Received request cmpl-c6bdd0d7a8a642cebadd7760a3f336c1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:42 [async_llm.py:261] Added request cmpl-c6bdd0d7a8a642cebadd7760a3f336c1-0.
INFO 03-02 01:40:43 [logger.py:42] Received request cmpl-21269b3223064fb39ad3f4b7745a187d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:43 [async_llm.py:261] Added request cmpl-21269b3223064fb39ad3f4b7745a187d-0.
INFO 03-02 01:40:44 [logger.py:42] Received request cmpl-9e42aa63e797485a8e152f869552454d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:44 [async_llm.py:261] Added request cmpl-9e42aa63e797485a8e152f869552454d-0.
INFO 03-02 01:40:45 [logger.py:42] Received request cmpl-acc0e71452ab457390c5174ff42d33c3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:45 [async_llm.py:261] Added request cmpl-acc0e71452ab457390c5174ff42d33c3-0.
INFO 03-02 01:40:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
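The `loggers.py:116` lines above are vLLM's periodic engine-stats report. A minimal sketch of extracting the numeric fields from one such line (the regex assumes the exact field layout observed in this log, which may differ across vLLM versions):

```python
import re

# One engine-stats line, copied verbatim from the log above.
line = ("INFO 03-02 01:40:46 [loggers.py:116] Engine 000: "
        "Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, "
        "Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, "
        "Prefix cache hit rate: 0.0%")

# Field layout as observed here; not guaranteed stable across vLLM releases.
pattern = re.compile(
    r"Avg prompt throughput: (?P<prompt_tps>[\d.]+) tokens/s, "
    r"Avg generation throughput: (?P<gen_tps>[\d.]+) tokens/s, "
    r"Running: (?P<running>\d+) reqs, Waiting: (?P<waiting>\d+) reqs, "
    r"GPU KV cache usage: (?P<kv_pct>[\d.]+)%, "
    r"Prefix cache hit rate: (?P<prefix_hit_pct>[\d.]+)%"
)

m = pattern.search(line)
stats = {k: float(v) for k, v in m.groupdict().items()}
print(stats)
```

Feeding each stats line through a parser like this turns the log stream into time-series points (throughput, queue depth, KV-cache pressure) suitable for dashboards or alerting.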
INFO 03-02 01:40:47 [logger.py:42] Received request cmpl-836ab056046448f38b0f0658ee285d8e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:47 [async_llm.py:261] Added request cmpl-836ab056046448f38b0f0658ee285d8e-0.
INFO 03-02 01:40:48 [logger.py:42] Received request cmpl-07619b1b2b2c43f48f1514bf6382f7c1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:48 [async_llm.py:261] Added request cmpl-07619b1b2b2c43f48f1514bf6382f7c1-0.
INFO 03-02 01:40:49 [logger.py:42] Received request cmpl-827ed090f02245bdbdf3ecef7f2a486b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:49 [async_llm.py:261] Added request cmpl-827ed090f02245bdbdf3ecef7f2a486b-0.
INFO 03-02 01:40:50 [logger.py:42] Received request cmpl-e955aae23852494eaa66c5a927f77f78-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:50 [async_llm.py:261] Added request cmpl-e955aae23852494eaa66c5a927f77f78-0.
INFO 03-02 01:40:51 [logger.py:42] Received request cmpl-28f72f50367348c7924112b4ddfe18de-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:51 [async_llm.py:261] Added request cmpl-28f72f50367348c7924112b4ddfe18de-0.
INFO 03-02 01:40:52 [logger.py:42] Received request cmpl-9673f927c71746ac9d63ae0900d8d587-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:52 [async_llm.py:261] Added request cmpl-9673f927c71746ac9d63ae0900d8d587-0.
INFO 03-02 01:40:53 [logger.py:42] Received request cmpl-0ac773cacd7145158f91ed62c565fc42-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:53 [async_llm.py:261] Added request cmpl-0ac773cacd7145158f91ed62c565fc42-0.
INFO 03-02 01:40:54 [logger.py:42] Received request cmpl-c74ffaf675b94e0ab5ac1e7bfc6ffa72-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:54 [async_llm.py:261] Added request cmpl-c74ffaf675b94e0ab5ac1e7bfc6ffa72-0.
INFO 03-02 01:40:55 [logger.py:42] Received request cmpl-9eff00298cac4f68be1cb989399ffa81-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:55 [async_llm.py:261] Added request cmpl-9eff00298cac4f68be1cb989399ffa81-0.
INFO 03-02 01:40:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:40:56 [logger.py:42] Received request cmpl-74e7c76bf3194c3bb14e81d10b5be995-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:56 [async_llm.py:261] Added request cmpl-74e7c76bf3194c3bb14e81d10b5be995-0.
INFO 03-02 01:40:57 [logger.py:42] Received request cmpl-e1a2077803f1402d9e0a68bd57576d64-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:57 [async_llm.py:261] Added request cmpl-e1a2077803f1402d9e0a68bd57576d64-0.
INFO 03-02 01:40:58 [logger.py:42] Received request cmpl-e998c40e51a04e679b01b73f7ff94ef3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:58 [async_llm.py:261] Added request cmpl-e998c40e51a04e679b01b73f7ff94ef3-0.
INFO 03-02 01:41:00 [logger.py:42] Received request cmpl-453752ea63254ed798a4b9e0f179e47e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:00 [async_llm.py:261] Added request cmpl-453752ea63254ed798a4b9e0f179e47e-0.
INFO 03-02 01:41:01 [logger.py:42] Received request cmpl-f450329ecde3409ca9e569da536ca6b0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:01 [async_llm.py:261] Added request cmpl-f450329ecde3409ca9e569da536ca6b0-0.
INFO 03-02 01:41:02 [logger.py:42] Received request cmpl-0336b626a7bb4318ac062a474b842c6f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:02 [async_llm.py:261] Added request cmpl-0336b626a7bb4318ac062a474b842c6f-0.
INFO 03-02 01:41:03 [logger.py:42] Received request cmpl-070c144da5fb40b8bad5aab92c2ff01e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:03 [async_llm.py:261] Added request cmpl-070c144da5fb40b8bad5aab92c2ff01e-0.
INFO 03-02 01:41:04 [logger.py:42] Received request cmpl-ee197d50ead0445593bed7c5abdb54be-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:04 [async_llm.py:261] Added request cmpl-ee197d50ead0445593bed7c5abdb54be-0.
INFO 03-02 01:41:05 [logger.py:42] Received request cmpl-0d7f08e2fee643709b41ec0f3e17250c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:05 [async_llm.py:261] Added request cmpl-0d7f08e2fee643709b41ec0f3e17250c-0.
INFO 03-02 01:41:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:41:06 [logger.py:42] Received request cmpl-95bfc77f27d7483e9d1d1c47c03e3ae5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:06 [async_llm.py:261] Added request cmpl-95bfc77f27d7483e9d1d1c47c03e3ae5-0.
INFO 03-02 01:41:07 [logger.py:42] Received request cmpl-8517ee0996244237bf95c9ca7cab3d27-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:07 [async_llm.py:261] Added request cmpl-8517ee0996244237bf95c9ca7cab3d27-0.
INFO 03-02 01:41:08 [logger.py:42] Received request cmpl-033b7273d11e4b8189ee2ef0ae3d658d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:08 [async_llm.py:261] Added request cmpl-033b7273d11e4b8189ee2ef0ae3d658d-0.
INFO 03-02 01:41:09 [logger.py:42] Received request cmpl-fef1e968201a4623a038e06c20165b97-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:09 [async_llm.py:261] Added request cmpl-fef1e968201a4623a038e06c20165b97-0.
INFO 03-02 01:41:10 [logger.py:42] Received request cmpl-c961e802ac98409cbb02484930eece6b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:10 [async_llm.py:261] Added request cmpl-c961e802ac98409cbb02484930eece6b-0.
INFO 03-02 01:41:11 [logger.py:42] Received request cmpl-38749624688a4ee8b2550fd26414d277-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:11 [async_llm.py:261] Added request cmpl-38749624688a4ee8b2550fd26414d277-0.
INFO 03-02 01:41:13 [logger.py:42] Received request cmpl-35774e3b653b49d1a0635b52cb930114-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:13 [async_llm.py:261] Added request cmpl-35774e3b653b49d1a0635b52cb930114-0.
INFO 03-02 01:41:14 [logger.py:42] Received request cmpl-f1e3f7c410b946e9892fe8c73e9bc411-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:14 [async_llm.py:261] Added request cmpl-f1e3f7c410b946e9892fe8c73e9bc411-0.
INFO 03-02 01:41:15 [logger.py:42] Received request cmpl-c4838c977ee24acebdc9f62fc93c8d8b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:15 [async_llm.py:261] Added request cmpl-c4838c977ee24acebdc9f62fc93c8d8b-0.
INFO 03-02 01:41:16 [logger.py:42] Received request cmpl-6b70103b73e14a988b5474fd8177efe6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:16 [async_llm.py:261] Added request cmpl-6b70103b73e14a988b5474fd8177efe6-0.
INFO 03-02 01:41:16 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.9 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 1.3%, Prefix cache hit rate: 0.0%
INFO 03-02 01:41:17 [logger.py:42] Received request cmpl-7cd7b6a94b6e4ec89d2850c0b03258eb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:17 [async_llm.py:261] Added request cmpl-7cd7b6a94b6e4ec89d2850c0b03258eb-0.
INFO 03-02 01:41:18 [logger.py:42] Received request cmpl-8130ef6a4fae4cc098cf22b9d5a9a22e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:18 [async_llm.py:261] Added request cmpl-8130ef6a4fae4cc098cf22b9d5a9a22e-0.
INFO 03-02 01:41:19 [logger.py:42] Received request cmpl-ec684c7f43d144efa1cfdd7cb8a10f96-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:19 [async_llm.py:261] Added request cmpl-ec684c7f43d144efa1cfdd7cb8a10f96-0.
INFO 03-02 01:41:20 [logger.py:42] Received request cmpl-93171124179649bdabe29f34193fcd94-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:20 [async_llm.py:261] Added request cmpl-93171124179649bdabe29f34193fcd94-0.
INFO 03-02 01:41:21 [logger.py:42] Received request cmpl-51400b2d2b09477088ebde5332fabd02-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:21 [async_llm.py:261] Added request cmpl-51400b2d2b09477088ebde5332fabd02-0.
INFO 03-02 01:41:22 [logger.py:42] Received request cmpl-02cf43ba0d50482cb882a30a95ca17a4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:22 [async_llm.py:261] Added request cmpl-02cf43ba0d50482cb882a30a95ca17a4-0.
INFO 03-02 01:41:23 [logger.py:42] Received request cmpl-b20c1483e0b444c3bee9d4d6fb6198bc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:23 [async_llm.py:261] Added request cmpl-b20c1483e0b444c3bee9d4d6fb6198bc-0.
INFO 03-02 01:41:25 [logger.py:42] Received request cmpl-1eac0d4d597d4a3c94c6567e21154733-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:25 [async_llm.py:261] Added request cmpl-1eac0d4d597d4a3c94c6567e21154733-0.
INFO 03-02 01:41:26 [logger.py:42] Received request cmpl-09a9dcce3e9b438398a4e6e265d891f9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:26 [async_llm.py:261] Added request cmpl-09a9dcce3e9b438398a4e6e265d891f9-0.
INFO 03-02 01:41:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.6 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:41:27 [logger.py:42] Received request cmpl-e6488448aaf24701ba787f0dd0309322-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:27 [async_llm.py:261] Added request cmpl-e6488448aaf24701ba787f0dd0309322-0.
INFO 03-02 01:41:28 [logger.py:42] Received request cmpl-b964424f860e4c8ab7d69bee04cadb62-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:28 [async_llm.py:261] Added request cmpl-b964424f860e4c8ab7d69bee04cadb62-0.
INFO 03-02 01:41:29 [logger.py:42] Received request cmpl-cde2a4f232ab43d5a0d377bbba8a7a71-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:29 [async_llm.py:261] Added request cmpl-cde2a4f232ab43d5a0d377bbba8a7a71-0.
INFO 03-02 01:41:30 [logger.py:42] Received request cmpl-bb635dd7d85a41d2abfd93181e5f46b4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:30 [async_llm.py:261] Added request cmpl-bb635dd7d85a41d2abfd93181e5f46b4-0.
INFO 03-02 01:41:31 [logger.py:42] Received request cmpl-cbdccba7bbc04fd2910734f666a5d509-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:31 [async_llm.py:261] Added request cmpl-cbdccba7bbc04fd2910734f666a5d509-0.
INFO 03-02 01:41:32 [logger.py:42] Received request cmpl-8b175879922f4ab4b623a74c209914fc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:32 [async_llm.py:261] Added request cmpl-8b175879922f4ab4b623a74c209914fc-0.
INFO 03-02 01:41:33 [logger.py:42] Received request cmpl-5cee93f758bc499b8aade97b8a31d1e9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:33 [async_llm.py:261] Added request cmpl-5cee93f758bc499b8aade97b8a31d1e9-0.
INFO 03-02 01:41:34 [logger.py:42] Received request cmpl-21f3f33a69824d0c8db37b4dd28ec8cf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:34 [async_llm.py:261] Added request cmpl-21f3f33a69824d0c8db37b4dd28ec8cf-0.
INFO 03-02 01:41:35 [logger.py:42] Received request cmpl-bf48725ea6884c5e8addf407450b3e5f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:35 [async_llm.py:261] Added request cmpl-bf48725ea6884c5e8addf407450b3e5f-0.
INFO 03-02 01:41:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:41:36 [logger.py:42] Received request cmpl-785d2919386840b0bcd4577e5233b558-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:36 [async_llm.py:261] Added request cmpl-785d2919386840b0bcd4577e5233b558-0.
INFO 03-02 01:41:38 [logger.py:42] Received request cmpl-e0a1e5ba0aa74992b7f9d92f8f61edfb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:38 [async_llm.py:261] Added request cmpl-e0a1e5ba0aa74992b7f9d92f8f61edfb-0.
INFO 03-02 01:41:39 [logger.py:42] Received request cmpl-6b9f5e9f21764827a610074d7d44d9ba-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:39 [async_llm.py:261] Added request cmpl-6b9f5e9f21764827a610074d7d44d9ba-0.
INFO 03-02 01:41:40 [logger.py:42] Received request cmpl-e10382f4e150467aa2c58d733f61e08b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:40 [async_llm.py:261] Added request cmpl-e10382f4e150467aa2c58d733f61e08b-0.
INFO 03-02 01:41:41 [logger.py:42] Received request cmpl-f11af4b7a409412f9dc652b7ecac1669-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:41 [async_llm.py:261] Added request cmpl-f11af4b7a409412f9dc652b7ecac1669-0.
INFO 03-02 01:41:42 [logger.py:42] Received request cmpl-8b65e61b38a54ab8bc9797d4d6c36d44-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:42 [async_llm.py:261] Added request cmpl-8b65e61b38a54ab8bc9797d4d6c36d44-0.
INFO 03-02 01:41:43 [logger.py:42] Received request cmpl-6b9f75a1d5b7468ca58bd168c9a29218-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:43 [async_llm.py:261] Added request cmpl-6b9f75a1d5b7468ca58bd168c9a29218-0.
INFO 03-02 01:41:44 [logger.py:42] Received request cmpl-94e48bdac0be4c4487d6419c1b8afa60-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:44 [async_llm.py:261] Added request cmpl-94e48bdac0be4c4487d6419c1b8afa60-0.
INFO 03-02 01:41:45 [logger.py:42] Received request cmpl-100e8d41936446bda89d28aeda605ad4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:45 [async_llm.py:261] Added request cmpl-100e8d41936446bda89d28aeda605ad4-0.
INFO 03-02 01:41:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:41:46 [logger.py:42] Received request cmpl-572f5e01e5aa415aae31b1a8034b58d1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:46 [async_llm.py:261] Added request cmpl-572f5e01e5aa415aae31b1a8034b58d1-0.
INFO 03-02 01:41:47 [logger.py:42] Received request cmpl-f1d94050f2c247f8bafa6f168614ebbe-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:47 [async_llm.py:261] Added request cmpl-f1d94050f2c247f8bafa6f168614ebbe-0.
INFO 03-02 01:41:48 [logger.py:42] Received request cmpl-5781133318084ad68f5a9e0262c8c486-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:48 [async_llm.py:261] Added request cmpl-5781133318084ad68f5a9e0262c8c486-0.
INFO 03-02 01:41:49 [logger.py:42] Received request cmpl-19ba863d54b4412486e2707234c1136c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:49 [async_llm.py:261] Added request cmpl-19ba863d54b4412486e2707234c1136c-0.
INFO 03-02 01:41:51 [logger.py:42] Received request cmpl-cfad9f8fa7e14e609e22fab420099f43-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:51 [async_llm.py:261] Added request cmpl-cfad9f8fa7e14e609e22fab420099f43-0.
INFO 03-02 01:41:52 [logger.py:42] Received request cmpl-340ac28940a04ef1b2ab0cbaf6fabcf2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:52 [async_llm.py:261] Added request cmpl-340ac28940a04ef1b2ab0cbaf6fabcf2-0.
INFO 03-02 01:41:53 [logger.py:42] Received request cmpl-c7f15470257c40fea8c5b82c277eb514-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:53 [async_llm.py:261] Added request cmpl-c7f15470257c40fea8c5b82c277eb514-0.
INFO 03-02 01:41:54 [logger.py:42] Received request cmpl-10098895b4684642809777e514a75063-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:54 [async_llm.py:261] Added request cmpl-10098895b4684642809777e514a75063-0.
INFO 03-02 01:41:55 [logger.py:42] Received request cmpl-9e939ee5b1c3436c89fec8a87221d7f7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:55 [async_llm.py:261] Added request cmpl-9e939ee5b1c3436c89fec8a87221d7f7-0.
INFO 03-02 01:41:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:41:56 [logger.py:42] Received request cmpl-e20992423575484f8591d642efbb6b8e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:56 [async_llm.py:261] Added request cmpl-e20992423575484f8591d642efbb6b8e-0.
INFO 03-02 01:41:57 [logger.py:42] Received request cmpl-f2637800092247d3bece3b68c91734bb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:57 [async_llm.py:261] Added request cmpl-f2637800092247d3bece3b68c91734bb-0.
INFO 03-02 01:41:58 [logger.py:42] Received request cmpl-b8de4218fde64f9e8d4c8b9f0f55430d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:58 [async_llm.py:261] Added request cmpl-b8de4218fde64f9e8d4c8b9f0f55430d-0.
INFO 03-02 01:41:59 [logger.py:42] Received request cmpl-0bad9639e9b64b0ca9fb2b8e097d2a0d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:59 [async_llm.py:261] Added request cmpl-0bad9639e9b64b0ca9fb2b8e097d2a0d-0.
INFO 03-02 01:42:00 [logger.py:42] Received request cmpl-994c9849693145619c82429296a362bc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:42:00 [async_llm.py:261] Added request cmpl-994c9849693145619c82429296a362bc-0.
INFO 03-02 01:42:01 [logger.py:42] Received request cmpl-7b3d2a44ba1749a48a5ceef7326edf48-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:42:01 [async_llm.py:261] Added request cmpl-7b3d2a44ba1749a48a5ceef7326edf48-0.
INFO 03-02 01:42:02 [logger.py:42] Received request cmpl-606312f1fd8349a7adcf27513af06f95-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:42:02 [async_llm.py:261] Added request cmpl-606312f1fd8349a7adcf27513af06f95-0.
INFO 03-02 01:42:04 [logger.py:42] Received request cmpl-1efdeffbf4864944a34a06f6ece1db23-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:42:04 [async_llm.py:261] Added request cmpl-1efdeffbf4864944a34a06f6ece1db23-0.
INFO 03-02 01:42:05 [logger.py:42] Received request cmpl-b3a2848ce09844ebb937a09be1c25b80-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:42:05 [async_llm.py:261] Added request cmpl-b3a2848ce09844ebb937a09be1c25b80-0.
INFO 03-02 01:42:06 [logger.py:42] Received request cmpl-4f79261d4f1a433cbd37e38451874be5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:42:06 [async_llm.py:261] Added request cmpl-4f79261d4f1a433cbd37e38451874be5-0.
INFO 03-02 01:42:06 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:42:07 [logger.py:42] Received request cmpl-a0ebbd4d09e04c25846fb0f1968a32a6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:42:07 [async_llm.py:261] Added request cmpl-a0ebbd4d09e04c25846fb0f1968a32a6-0.
INFO 03-02 01:42:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:42:46 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.9 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 1.3%, Prefix cache hit rate: 0.0%
INFO 03-02 01:42:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.6 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:43:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:43:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:43:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:43:30 [async_llm.py:261] Added request cmpl-23815a977c4146ec999f2d8e424deb1d-0.
INFO 03-02 01:43:31 [logger.py:42] Received request cmpl-d78d249a2b4042768deb5632aac8dc1b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:43:31 [async_llm.py:261] Added request cmpl-d78d249a2b4042768deb5632aac8dc1b-0.
INFO 03-02 01:43:32 [logger.py:42] Received request cmpl-f5ae772b5970436a88762e96039e98c4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:43:32 [async_llm.py:261] Added request cmpl-f5ae772b5970436a88762e96039e98c4-0.
INFO 03-02 01:43:34 [logger.py:42] Received request cmpl-f0e856ee0f6f4360aadc361d1c0b62e7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:43:34 [async_llm.py:261] Added request cmpl-f0e856ee0f6f4360aadc361d1c0b62e7-0.
INFO 03-02 01:43:35 [logger.py:42] Received request cmpl-de1adfacea8842488cdce6d069f2bfba-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:43:35 [async_llm.py:261] Added request cmpl-de1adfacea8842488cdce6d069f2bfba-0.
INFO 03-02 01:43:36 [logger.py:42] Received request cmpl-4caf849571374abb9a855fba38d50b27-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:43:36 [async_llm.py:261] Added request cmpl-4caf849571374abb9a855fba38d50b27-0.
INFO 03-02 01:43:36 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:43:37 [logger.py:42] Received request cmpl-d70ecf543bfc40a9b6b27b90220d17d7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:43:37 [async_llm.py:261] Added request cmpl-d70ecf543bfc40a9b6b27b90220d17d7-0.
INFO 03-02 01:43:38 [logger.py:42] Received request cmpl-e222681734f941588b859b4798734291-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:43:38 [async_llm.py:261] Added request cmpl-e222681734f941588b859b4798734291-0.
INFO 03-02 01:43:39 [logger.py:42] Received request cmpl-a38ce738eb2e476385ffc12e9a7fa1ed-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:43:39 [async_llm.py:261] Added request cmpl-a38ce738eb2e476385ffc12e9a7fa1ed-0.
INFO 03-02 01:43:40 [logger.py:42] Received request cmpl-6fa9c68ac16946589eb52866ca7cc978-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:43:40 [async_llm.py:261] Added request cmpl-6fa9c68ac16946589eb52866ca7cc978-0.
INFO 03-02 01:43:41 [logger.py:42] Received request cmpl-71075b3de79a4fc38ad949319e130859-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:43:41 [async_llm.py:261] Added request cmpl-71075b3de79a4fc38ad949319e130859-0.
INFO 03-02 01:43:42 [logger.py:42] Received request cmpl-af9f8408364c450da066e2ae5c13dac1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:43:42 [async_llm.py:261] Added request cmpl-af9f8408364c450da066e2ae5c13dac1-0.
INFO 03-02 01:43:43 [logger.py:42] Received request cmpl-2b9e9360a84e452e911e73b13cb2ce21-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:43:43 [async_llm.py:261] Added request cmpl-2b9e9360a84e452e911e73b13cb2ce21-0.
INFO 03-02 01:43:44 [logger.py:42] Received request cmpl-daf4ca3e4d4a4aeeb1232f8435cb59d0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:43:44 [async_llm.py:261] Added request cmpl-daf4ca3e4d4a4aeeb1232f8435cb59d0-0.
INFO 03-02 01:43:45 [logger.py:42] Received request cmpl-1093d47447be4e3a8276093d6622c465-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:43:45 [async_llm.py:261] Added request cmpl-1093d47447be4e3a8276093d6622c465-0.
INFO 03-02 01:43:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:43:47 [logger.py:42] Received request cmpl-91a2bf2c1067409eb10b71b593951f4a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:43:47 [async_llm.py:261] Added request cmpl-91a2bf2c1067409eb10b71b593951f4a-0.
INFO 03-02 01:43:48 [logger.py:42] Received request cmpl-2e7b2f0dba1646c19f2c8d08f647e18e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:43:48 [async_llm.py:261] Added request cmpl-2e7b2f0dba1646c19f2c8d08f647e18e-0.
INFO 03-02 01:43:49 [logger.py:42] Received request cmpl-23fd4820b0ce4efebea61010d39afbfa-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:43:49 [async_llm.py:261] Added request cmpl-23fd4820b0ce4efebea61010d39afbfa-0.
INFO 03-02 01:43:50 [logger.py:42] Received request cmpl-2aa1bccb654b44dc89eb916128f59b3c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:43:50 [async_llm.py:261] Added request cmpl-2aa1bccb654b44dc89eb916128f59b3c-0.
INFO 03-02 01:43:51 [logger.py:42] Received request cmpl-573abb47cbe0483681cdb6fbc3a76d64-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:43:51 [async_llm.py:261] Added request cmpl-573abb47cbe0483681cdb6fbc3a76d64-0.
INFO 03-02 01:43:52 [logger.py:42] Received request cmpl-cdeeddd607f24425af5bf0b4387b2488-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:43:52 [async_llm.py:261] Added request cmpl-cdeeddd607f24425af5bf0b4387b2488-0.
INFO 03-02 01:43:53 [logger.py:42] Received request cmpl-dfe2a3eec04e4dff83629ee4b1b663de-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:43:53 [async_llm.py:261] Added request cmpl-dfe2a3eec04e4dff83629ee4b1b663de-0.
INFO 03-02 01:43:54 [logger.py:42] Received request cmpl-5e3a48be2ac74d0abe3add1556f83a4d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:43:54 [async_llm.py:261] Added request cmpl-5e3a48be2ac74d0abe3add1556f83a4d-0.
INFO 03-02 01:43:55 [logger.py:42] Received request cmpl-c7b6907b5b364c8aa496e6a760baeb7c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:43:55 [async_llm.py:261] Added request cmpl-c7b6907b5b364c8aa496e6a760baeb7c-0.
INFO 03-02 01:43:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:43:56 [logger.py:42] Received request cmpl-3a3aef023c45427cb7b065cd802dfcd5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:43:56 [async_llm.py:261] Added request cmpl-3a3aef023c45427cb7b065cd802dfcd5-0.
INFO 03-02 01:43:57 [logger.py:42] Received request cmpl-beaa7d9f529540c9a5587e794befbf36-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:43:57 [async_llm.py:261] Added request cmpl-beaa7d9f529540c9a5587e794befbf36-0.
INFO 03-02 01:43:58 [logger.py:42] Received request cmpl-6cb488576e464c07b34fd7bfa048140c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:43:58 [async_llm.py:261] Added request cmpl-6cb488576e464c07b34fd7bfa048140c-0.
INFO 03-02 01:44:00 [logger.py:42] Received request cmpl-ffe16ceae6394e94adc4aca582ad57df-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:44:00 [async_llm.py:261] Added request cmpl-ffe16ceae6394e94adc4aca582ad57df-0.
INFO 03-02 01:44:01 [logger.py:42] Received request cmpl-5674adfc993b424190eeb7bbcf9c69b3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:44:01 [async_llm.py:261] Added request cmpl-5674adfc993b424190eeb7bbcf9c69b3-0.
INFO 03-02 01:44:02 [logger.py:42] Received request cmpl-727754d766ef4008af8186823f5fc843-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:44:02 [async_llm.py:261] Added request cmpl-727754d766ef4008af8186823f5fc843-0.
INFO 03-02 01:44:03 [logger.py:42] Received request cmpl-779282df0cef4e4ca6288ce736c130a2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:44:03 [async_llm.py:261] Added request cmpl-779282df0cef4e4ca6288ce736c130a2-0.
INFO 03-02 01:44:04 [logger.py:42] Received request cmpl-65bd0ce77a2f41fe84f196a2a8721696-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:44:04 [async_llm.py:261] Added request cmpl-65bd0ce77a2f41fe84f196a2a8721696-0.
INFO 03-02 01:44:05 [logger.py:42] Received request cmpl-a7e9530427d04e2e8339c45193f23b6b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:44:05 [async_llm.py:261] Added request cmpl-a7e9530427d04e2e8339c45193f23b6b-0.
INFO 03-02 01:44:06 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:44:06 [logger.py:42] Received request cmpl-306cc38559604ac7bcea007792126af0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:44:06 [async_llm.py:261] Added request cmpl-306cc38559604ac7bcea007792126af0-0.
INFO 03-02 01:44:07 [logger.py:42] Received request cmpl-bf0d6e28540e414c857a2bb619273089-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:44:07 [async_llm.py:261] Added request cmpl-bf0d6e28540e414c857a2bb619273089-0.
INFO 03-02 01:44:08 [logger.py:42] Received request cmpl-3d02830bb81f416888a42bcabd6c2f4c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:44:08 [async_llm.py:261] Added request cmpl-3d02830bb81f416888a42bcabd6c2f4c-0.
INFO 03-02 01:44:09 [logger.py:42] Received request cmpl-a38e1e0701314cafaa5efb7dc94fe4ba-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:44:09 [async_llm.py:261] Added request cmpl-a38e1e0701314cafaa5efb7dc94fe4ba-0.
INFO 03-02 01:44:10 [logger.py:42] Received request cmpl-bb7c314c902e4eeba405bbd43339443a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:44:10 [async_llm.py:261] Added request cmpl-bb7c314c902e4eeba405bbd43339443a-0.
INFO 03-02 01:44:11 [logger.py:42] Received request cmpl-fc1682545ac5433f9f1694d375c70386-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:44:11 [async_llm.py:261] Added request cmpl-fc1682545ac5433f9f1694d375c70386-0.
INFO 03-02 01:44:13 [logger.py:42] Received request cmpl-7289fd110df0493587f8bbd84143f0aa-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:44:13 [async_llm.py:261] Added request cmpl-7289fd110df0493587f8bbd84143f0aa-0.
INFO 03-02 01:44:14 [logger.py:42] Received request cmpl-8537dfa796454604a6976a3a541e8c3a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:44:14 [async_llm.py:261] Added request cmpl-8537dfa796454604a6976a3a541e8c3a-0.
INFO 03-02 01:44:15 [logger.py:42] Received request cmpl-29151d45e89b45ce9947efd5149abf23-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:44:15 [async_llm.py:261] Added request cmpl-29151d45e89b45ce9947efd5149abf23-0.
INFO 03-02 01:44:16 [logger.py:42] Received request cmpl-f9c735c18c5c489792f8f8797f3c7a03-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:44:16 [async_llm.py:261] Added request cmpl-f9c735c18c5c489792f8f8797f3c7a03-0.
INFO 03-02 01:44:16 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.9 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 1.3%, Prefix cache hit rate: 0.0%
INFO 03-02 01:44:17 [logger.py:42] Received request cmpl-d2d229714bdc4877b48e8320882fea41-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:44:17 [async_llm.py:261] Added request cmpl-d2d229714bdc4877b48e8320882fea41-0.
INFO 03-02 01:44:18 [logger.py:42] Received request cmpl-1c8918587477437588ea5ca935226678-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:44:18 [async_llm.py:261] Added request cmpl-1c8918587477437588ea5ca935226678-0.
INFO 03-02 01:44:19 [logger.py:42] Received request cmpl-773fedb60eb341adaba5a4a5949bfab6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:44:19 [async_llm.py:261] Added request cmpl-773fedb60eb341adaba5a4a5949bfab6-0.
INFO 03-02 01:44:20 [logger.py:42] Received request cmpl-f3ebbd5d41ed410d94beae52d5cefd19-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:44:20 [async_llm.py:261] Added request cmpl-f3ebbd5d41ed410d94beae52d5cefd19-0.
INFO 03-02 01:44:21 [logger.py:42] Received request cmpl-5b5c19ff50524b5cb1783fb70c06e3fa-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:44:21 [async_llm.py:261] Added request cmpl-5b5c19ff50524b5cb1783fb70c06e3fa-0.
INFO 03-02 01:44:22 [logger.py:42] Received request cmpl-974f37a3dede43898e31a646165e141b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:44:22 [async_llm.py:261] Added request cmpl-974f37a3dede43898e31a646165e141b-0.
INFO 03-02 01:44:23 [logger.py:42] Received request cmpl-aaa515a5acf54099b6079ab4179130ef-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:44:23 [async_llm.py:261] Added request cmpl-aaa515a5acf54099b6079ab4179130ef-0.
INFO 03-02 01:44:25 [logger.py:42] Received request cmpl-596c596946b34dcb8b1e72bb286760cd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:44:25 [async_llm.py:261] Added request cmpl-596c596946b34dcb8b1e72bb286760cd-0.
INFO 03-02 01:44:26 [logger.py:42] Received request cmpl-11e0ab95b6df4c73a3dc1b5603fd29a9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:44:26 [async_llm.py:261] Added request cmpl-11e0ab95b6df4c73a3dc1b5603fd29a9-0.
INFO 03-02 01:44:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.6 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:44:27 [logger.py:42] Received request cmpl-d60b8b096ee54b96a758f58a71105973-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:44:27 [async_llm.py:261] Added request cmpl-d60b8b096ee54b96a758f58a71105973-0.
INFO 03-02 01:44:28 [logger.py:42] Received request cmpl-fda0be9164264440b6cbdee05ef3f201-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:44:28 [async_llm.py:261] Added request cmpl-fda0be9164264440b6cbdee05ef3f201-0.
INFO 03-02 01:44:29 [logger.py:42] Received request cmpl-dac67396a7f14137a545d204be08a45a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:44:29 [async_llm.py:261] Added request cmpl-dac67396a7f14137a545d204be08a45a-0.
INFO 03-02 01:44:30 [logger.py:42] Received request cmpl-352b78a10f944b9295f7c01c8599b2fe-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:44:30 [async_llm.py:261] Added request cmpl-352b78a10f944b9295f7c01c8599b2fe-0.
INFO 03-02 01:44:31 [logger.py:42] Received request cmpl-083152ba545f49258847a5b9d02f302c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:44:31 [async_llm.py:261] Added request cmpl-083152ba545f49258847a5b9d02f302c-0.
INFO 03-02 01:44:32 [logger.py:42] Received request cmpl-6ee828dee6764a3e88beb1fa94092102-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:44:32 [async_llm.py:261] Added request cmpl-6ee828dee6764a3e88beb1fa94092102-0.
INFO 03-02 01:44:33 [logger.py:42] Received request cmpl-7287ac3ee67c4214b9390dfe7b7b49b6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:44:33 [async_llm.py:261] Added request cmpl-7287ac3ee67c4214b9390dfe7b7b49b6-0.
INFO 03-02 01:44:34 [logger.py:42] Received request cmpl-b093764ddca54f19a88e4980213909e3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:44:34 [async_llm.py:261] Added request cmpl-b093764ddca54f19a88e4980213909e3-0.
INFO 03-02 01:44:35 [logger.py:42] Received request cmpl-cf2d000450fe4d2b81a2a415a4807b99-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:44:35 [async_llm.py:261] Added request cmpl-cf2d000450fe4d2b81a2a415a4807b99-0.
INFO 03-02 01:44:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:44:36 [logger.py:42] Received request cmpl-65bf18747ec6481ea96476dc2b9c207d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:44:36 [async_llm.py:261] Added request cmpl-65bf18747ec6481ea96476dc2b9c207d-0.
INFO 03-02 01:44:38 [logger.py:42] Received request cmpl-3618f6af960245dbbaca79953b286820-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:44:38 [async_llm.py:261] Added request cmpl-3618f6af960245dbbaca79953b286820-0.
INFO 03-02 01:44:39 [logger.py:42] Received request cmpl-11032979a6244f49b3b4a01f5326a8c7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:44:39 [async_llm.py:261] Added request cmpl-11032979a6244f49b3b4a01f5326a8c7-0.
INFO 03-02 01:44:40 [logger.py:42] Received request cmpl-051de4aab7f4481db4e98159626b1011-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:44:40 [async_llm.py:261] Added request cmpl-051de4aab7f4481db4e98159626b1011-0.
INFO 03-02 01:44:41 [logger.py:42] Received request cmpl-640b21a46d1d451991c5f9489022a3d4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:44:41 [async_llm.py:261] Added request cmpl-640b21a46d1d451991c5f9489022a3d4-0.
INFO 03-02 01:44:42 [logger.py:42] Received request cmpl-e67ab5b30362406cb0a55b42a6034079-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:44:42 [async_llm.py:261] Added request cmpl-e67ab5b30362406cb0a55b42a6034079-0.
INFO 03-02 01:44:43 [logger.py:42] Received request cmpl-66da3161884d4f1594970967d31f9cc5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:44:43 [async_llm.py:261] Added request cmpl-66da3161884d4f1594970967d31f9cc5-0.
INFO 03-02 01:44:44 [logger.py:42] Received request cmpl-80f0c3109ecc41b4a4685297257cb5ec-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:44:44 [async_llm.py:261] Added request cmpl-80f0c3109ecc41b4a4685297257cb5ec-0.
INFO 03-02 01:44:45 [logger.py:42] Received request cmpl-5f473b83ebd14b74879dad6bf1206110-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:44:45 [async_llm.py:261] Added request cmpl-5f473b83ebd14b74879dad6bf1206110-0.
INFO 03-02 01:44:46 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:44:46 [logger.py:42] Received request cmpl-73d54d499dba43c1b96276dd992ed004-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:44:46 [async_llm.py:261] Added request cmpl-73d54d499dba43c1b96276dd992ed004-0.
INFO 03-02 01:44:47 [logger.py:42] Received request cmpl-3ca7a99776584061a34a39402e6cf191-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:44:47 [async_llm.py:261] Added request cmpl-3ca7a99776584061a34a39402e6cf191-0.
INFO 03-02 01:44:48 [logger.py:42] Received request cmpl-29cfed97db7149fa9a7252b660c4b10c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:44:48 [async_llm.py:261] Added request cmpl-29cfed97db7149fa9a7252b660c4b10c-0.
INFO 03-02 01:44:49 [logger.py:42] Received request cmpl-f9363897806640b583b49a292297d8b4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:44:49 [async_llm.py:261] Added request cmpl-f9363897806640b583b49a292297d8b4-0.
INFO 03-02 01:44:51 [logger.py:42] Received request cmpl-1ab4e4624a624093aa1a47689abc3f0a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:44:51 [async_llm.py:261] Added request cmpl-1ab4e4624a624093aa1a47689abc3f0a-0.
INFO 03-02 01:44:52 [logger.py:42] Received request cmpl-fae0d37dd1104a30877fb6407cc71b58-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:44:52 [async_llm.py:261] Added request cmpl-fae0d37dd1104a30877fb6407cc71b58-0.
INFO 03-02 01:44:53 [logger.py:42] Received request cmpl-5a434cde7e594eb7b3aea8cd4e541a28-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:44:53 [async_llm.py:261] Added request cmpl-5a434cde7e594eb7b3aea8cd4e541a28-0.
INFO 03-02 01:44:54 [logger.py:42] Received request cmpl-5f3e7ed189a849a3a02f1f5cdeffa6ea-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:44:54 [async_llm.py:261] Added request cmpl-5f3e7ed189a849a3a02f1f5cdeffa6ea-0.
INFO 03-02 01:44:55 [logger.py:42] Received request cmpl-63a7d228f7814fa7a3de86a6043cb1ff-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:44:55 [async_llm.py:261] Added request cmpl-63a7d228f7814fa7a3de86a6043cb1ff-0.
INFO 03-02 01:44:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:44:56 [logger.py:42] Received request cmpl-2ae27d0f1e7a4c8daba8204ca7106470-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:44:56 [async_llm.py:261] Added request cmpl-2ae27d0f1e7a4c8daba8204ca7106470-0.
INFO 03-02 01:44:57 [logger.py:42] Received request cmpl-c317487f1a054652a337f849788dfc07-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:44:57 [async_llm.py:261] Added request cmpl-c317487f1a054652a337f849788dfc07-0.
INFO 03-02 01:44:58 [logger.py:42] Received request cmpl-e5750322268a4c40b445c77d3c45b523-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:44:58 [async_llm.py:261] Added request cmpl-e5750322268a4c40b445c77d3c45b523-0.
INFO 03-02 01:44:59 [logger.py:42] Received request cmpl-48c8c5b645ba4b30a1901e1abf8cdacc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:44:59 [async_llm.py:261] Added request cmpl-48c8c5b645ba4b30a1901e1abf8cdacc-0.
INFO 03-02 01:45:00 [logger.py:42] Received request cmpl-7e4bbaaa740b4c5fa51e595a54687f0e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:45:00 [async_llm.py:261] Added request cmpl-7e4bbaaa740b4c5fa51e595a54687f0e-0.
INFO 03-02 01:45:01 [logger.py:42] Received request cmpl-3450cc12e516441aa2d9c696ce396339-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:45:01 [async_llm.py:261] Added request cmpl-3450cc12e516441aa2d9c696ce396339-0.
INFO 03-02 01:45:02 [logger.py:42] Received request cmpl-3bfeca0520b84694a5af943653ce955d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:45:02 [async_llm.py:261] Added request cmpl-3bfeca0520b84694a5af943653ce955d-0.
INFO 03-02 01:45:04 [logger.py:42] Received request cmpl-b2ff8a0bce3a42a683456bb5c1852db5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:45:04 [async_llm.py:261] Added request cmpl-b2ff8a0bce3a42a683456bb5c1852db5-0.
INFO 03-02 01:45:05 [logger.py:42] Received request cmpl-e30d48607eaf4c0dbfa523d2e109261d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:45:05 [async_llm.py:261] Added request cmpl-e30d48607eaf4c0dbfa523d2e109261d-0.
INFO 03-02 01:45:06 [logger.py:42] Received request cmpl-5a3ccb2a233544aeb06dc02f57f42fa7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:45:06 [async_llm.py:261] Added request cmpl-5a3ccb2a233544aeb06dc02f57f42fa7-0.
INFO 03-02 01:45:06 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:45:07 [logger.py:42] Received request cmpl-85a8fb79939f45d082ab547b8803732c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:45:07 [async_llm.py:261] Added request cmpl-85a8fb79939f45d082ab547b8803732c-0.
INFO 03-02 01:45:08 [logger.py:42] Received request cmpl-8cd5e62e6bc04d0c8532ff58374c0ce8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:45:08 [async_llm.py:261] Added request cmpl-8cd5e62e6bc04d0c8532ff58374c0ce8-0.
INFO 03-02 01:45:09 [logger.py:42] Received request cmpl-7333b533c07f46d590b013ba9a4d78f9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:45:09 [async_llm.py:261] Added request cmpl-7333b533c07f46d590b013ba9a4d78f9-0.
INFO 03-02 01:45:10 [logger.py:42] Received request cmpl-8c8dcaf5b4a3450fa85509a7e4355939-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:45:10 [async_llm.py:261] Added request cmpl-8c8dcaf5b4a3450fa85509a7e4355939-0.
INFO 03-02 01:45:11 [logger.py:42] Received request cmpl-c92bbce556564b6181e3476e4a89a095-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:45:11 [async_llm.py:261] Added request cmpl-c92bbce556564b6181e3476e4a89a095-0.
INFO 03-02 01:45:12 [logger.py:42] Received request cmpl-3ab696947e7040a8804b4b47d56d45ca-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:45:12 [async_llm.py:261] Added request cmpl-3ab696947e7040a8804b4b47d56d45ca-0.
INFO 03-02 01:45:13 [logger.py:42] Received request cmpl-dfca1d69b06445a4a329eba86665ce8b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:45:13 [async_llm.py:261] Added request cmpl-dfca1d69b06445a4a329eba86665ce8b-0.
INFO 03-02 01:45:14 [logger.py:42] Received request cmpl-5aa7fca0fc3c408ca92d05606d54be39-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:45:14 [async_llm.py:261] Added request cmpl-5aa7fca0fc3c408ca92d05606d54be39-0.
INFO 03-02 01:45:15 [logger.py:42] Received request cmpl-b63b44d556bb43ceab24c8617d10de6a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:45:15 [async_llm.py:261] Added request cmpl-b63b44d556bb43ceab24c8617d10de6a-0.
INFO 03-02 01:45:16 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:45:17 [logger.py:42] Received request cmpl-0ab6f9e5fd3c47d78f766ec9ff5d6c1b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:45:17 [async_llm.py:261] Added request cmpl-0ab6f9e5fd3c47d78f766ec9ff5d6c1b-0.
INFO 03-02 01:45:18 [logger.py:42] Received request cmpl-cc27927d9e3d448184a0c410cdced9c8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:45:18 [async_llm.py:261] Added request cmpl-cc27927d9e3d448184a0c410cdced9c8-0.
INFO 03-02 01:45:19 [logger.py:42] Received request cmpl-26c1ce4f9b6b4ecdb078f451fb21c592-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:45:19 [async_llm.py:261] Added request cmpl-26c1ce4f9b6b4ecdb078f451fb21c592-0.
INFO 03-02 01:45:20 [logger.py:42] Received request cmpl-95af50c8190e425c9625502f87fee87c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:45:20 [async_llm.py:261] Added request cmpl-95af50c8190e425c9625502f87fee87c-0.
INFO 03-02 01:45:21 [logger.py:42] Received request cmpl-b265e7034ddb473c8df3254afc0b2726-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:45:21 [async_llm.py:261] Added request cmpl-b265e7034ddb473c8df3254afc0b2726-0.
INFO 03-02 01:45:22 [logger.py:42] Received request cmpl-50de946e199a43d6b4de887a22304399-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:45:22 [async_llm.py:261] Added request cmpl-50de946e199a43d6b4de887a22304399-0.
INFO 03-02 01:45:23 [logger.py:42] Received request cmpl-0fb3fd6819b948a78697cbf5a0e0b2e2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:45:23 [async_llm.py:261] Added request cmpl-0fb3fd6819b948a78697cbf5a0e0b2e2-0.
INFO 03-02 01:45:24 [logger.py:42] Received request cmpl-64a6659f74714896b86cafc8db9feb68-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:45:24 [async_llm.py:261] Added request cmpl-64a6659f74714896b86cafc8db9feb68-0.
INFO 03-02 01:45:25 [logger.py:42] Received request cmpl-c8d84221b82146eba776f2feff736452-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:45:25 [async_llm.py:261] Added request cmpl-c8d84221b82146eba776f2feff736452-0.
INFO 03-02 01:45:26 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:45:26 [logger.py:42] Received request cmpl-60fe6d2437404afeb63bd3f7188a0013-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:45:26 [async_llm.py:261] Added request cmpl-60fe6d2437404afeb63bd3f7188a0013-0.
INFO 03-02 01:45:27 [logger.py:42] Received request cmpl-2c4c347a7200482fae287f55251c66aa-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:45:27 [async_llm.py:261] Added request cmpl-2c4c347a7200482fae287f55251c66aa-0.
INFO 03-02 01:45:28 [logger.py:42] Received request cmpl-7c81128bfea341efbd4a9e7bc5ebbae2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:45:28 [async_llm.py:261] Added request cmpl-7c81128bfea341efbd4a9e7bc5ebbae2-0.
INFO 03-02 01:45:30 [logger.py:42] Received request cmpl-427a900fcf9b45ac9814881de5f150c6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:45:30 [async_llm.py:261] Added request cmpl-427a900fcf9b45ac9814881de5f150c6-0.
INFO 03-02 01:45:31 [logger.py:42] Received request cmpl-e94a9effd77e4d43a3bbd376bb508b36-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:45:31 [async_llm.py:261] Added request cmpl-e94a9effd77e4d43a3bbd376bb508b36-0.
INFO 03-02 01:45:32 [logger.py:42] Received request cmpl-3163f9ade79d41b3b93c7379e7772cf7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:45:32 [async_llm.py:261] Added request cmpl-3163f9ade79d41b3b93c7379e7772cf7-0.
INFO 03-02 01:45:33 [logger.py:42] Received request cmpl-f9ead56062d44fc893ca16b5cd5dc11f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:45:33 [async_llm.py:261] Added request cmpl-f9ead56062d44fc893ca16b5cd5dc11f-0.
INFO 03-02 01:45:34 [logger.py:42] Received request cmpl-13d2bbe51756418788e363e17dbe313a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:45:34 [async_llm.py:261] Added request cmpl-13d2bbe51756418788e363e17dbe313a-0.
INFO 03-02 01:45:35 [logger.py:42] Received request cmpl-d0b8f981316446d98fd817393c94d7f0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:45:35 [async_llm.py:261] Added request cmpl-d0b8f981316446d98fd817393c94d7f0-0.
INFO 03-02 01:45:36 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:45:36 [logger.py:42] Received request cmpl-3c552ce7d8874244b7cc1d57eb41949d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:45:36 [async_llm.py:261] Added request cmpl-3c552ce7d8874244b7cc1d57eb41949d-0.
INFO 03-02 01:45:37 [logger.py:42] Received request cmpl-975a5fce1604486aaebae95be8a912fb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:45:37 [async_llm.py:261] Added request cmpl-975a5fce1604486aaebae95be8a912fb-0.
INFO 03-02 01:45:38 [logger.py:42] Received request cmpl-81513d7629514d18b32a9e4ebc40ef46-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:45:38 [async_llm.py:261] Added request cmpl-81513d7629514d18b32a9e4ebc40ef46-0.
INFO 03-02 01:45:39 [logger.py:42] Received request cmpl-b548cd6c06c04cc09f0e55f43d1d9d6a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:45:39 [async_llm.py:261] Added request cmpl-b548cd6c06c04cc09f0e55f43d1d9d6a-0.
INFO 03-02 01:45:40 [logger.py:42] Received request cmpl-d2f59252555540cbabe403d80a430561-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:45:40 [async_llm.py:261] Added request cmpl-d2f59252555540cbabe403d80a430561-0.
INFO 03-02 01:45:42 [logger.py:42] Received request cmpl-524897352dcc42b8962d7eca6eae1737-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:45:42 [async_llm.py:261] Added request cmpl-524897352dcc42b8962d7eca6eae1737-0.
INFO 03-02 01:45:43 [logger.py:42] Received request cmpl-5006d671bbbe4ff484352bbbdaf28ffe-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:45:43 [async_llm.py:261] Added request cmpl-5006d671bbbe4ff484352bbbdaf28ffe-0.
INFO 03-02 01:45:44 [logger.py:42] Received request cmpl-455606c7a5f94c5bb13b6cc5cc882cfa-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:45:44 [async_llm.py:261] Added request cmpl-455606c7a5f94c5bb13b6cc5cc882cfa-0.
INFO 03-02 01:45:45 [logger.py:42] Received request cmpl-1478f383d324476382ed91cd1afe9124-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:45:45 [async_llm.py:261] Added request cmpl-1478f383d324476382ed91cd1afe9124-0.
INFO 03-02 01:45:46 [logger.py:42] Received request cmpl-b2f9cfa604f942b39f1bf86ab0ed6167-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:45:46 [async_llm.py:261] Added request cmpl-b2f9cfa604f942b39f1bf86ab0ed6167-0.
INFO 03-02 01:45:46 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:45:47 [logger.py:42] Received request cmpl-d1370ff1d9bb44ffbc1bc81a6f8f2bd3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:45:47 [async_llm.py:261] Added request cmpl-d1370ff1d9bb44ffbc1bc81a6f8f2bd3-0.
INFO 03-02 01:45:48 [logger.py:42] Received request cmpl-acba429feee241b38a604a2959446bb3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:45:48 [async_llm.py:261] Added request cmpl-acba429feee241b38a604a2959446bb3-0.
INFO 03-02 01:45:49 [logger.py:42] Received request cmpl-d103b34686e744dab58460b9697e7aff-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:45:49 [async_llm.py:261] Added request cmpl-d103b34686e744dab58460b9697e7aff-0.
INFO 03-02 01:45:50 [logger.py:42] Received request cmpl-95ed6f2cbea04cf8853b02dd5498278c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:45:50 [async_llm.py:261] Added request cmpl-95ed6f2cbea04cf8853b02dd5498278c-0.
INFO 03-02 01:45:51 [logger.py:42] Received request cmpl-000bf465479545198aa71baefccc3946-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:45:51 [async_llm.py:261] Added request cmpl-000bf465479545198aa71baefccc3946-0.
INFO 03-02 01:45:52 [logger.py:42] Received request cmpl-279aa2c49b51483eb94e913a96db413f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:45:52 [async_llm.py:261] Added request cmpl-279aa2c49b51483eb94e913a96db413f-0.
INFO 03-02 01:45:53 [logger.py:42] Received request cmpl-341ec9f372d948eaa68a0483aea7dd70-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:45:53 [async_llm.py:261] Added request cmpl-341ec9f372d948eaa68a0483aea7dd70-0.
INFO 03-02 01:45:55 [logger.py:42] Received request cmpl-2ad43998a1e3423c8d6597de76c1fc4b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:45:55 [async_llm.py:261] Added request cmpl-2ad43998a1e3423c8d6597de76c1fc4b-0.
INFO 03-02 01:45:56 [logger.py:42] Received request cmpl-e12c591afe9d4ef98f84833dd3a1c4da-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:45:56 [async_llm.py:261] Added request cmpl-e12c591afe9d4ef98f84833dd3a1c4da-0.
INFO 03-02 01:45:56 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:45:57 [logger.py:42] Received request cmpl-261b992dc86440c5a0e67564f64d4928-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:45:57 [async_llm.py:261] Added request cmpl-261b992dc86440c5a0e67564f64d4928-0.
INFO 03-02 01:45:58 [logger.py:42] Received request cmpl-a9b22bdadbb546dba71c3dbceed4c23d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:45:58 [async_llm.py:261] Added request cmpl-a9b22bdadbb546dba71c3dbceed4c23d-0.
INFO 03-02 01:45:59 [logger.py:42] Received request cmpl-0c8690e03fc443f6a87137e8249974a1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:45:59 [async_llm.py:261] Added request cmpl-0c8690e03fc443f6a87137e8249974a1-0.
INFO 03-02 01:46:00 [logger.py:42] Received request cmpl-948ea13ffa5d4e92be1ac82d7cb81428-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:46:00 [async_llm.py:261] Added request cmpl-948ea13ffa5d4e92be1ac82d7cb81428-0.
INFO 03-02 01:46:01 [logger.py:42] Received request cmpl-a388aefcda6f4ab0985d6844f38f7f90-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:46:01 [async_llm.py:261] Added request cmpl-a388aefcda6f4ab0985d6844f38f7f90-0.
[... 32 further identical request/response log triplets elided (01:46:02–01:46:36, one request per second from 1.2.3.5, same prompt 'write a quick sort algorithm.', same SamplingParams with max_tokens=5, differing only in request ID); the periodic Engine 000 throughput lines in this window all report: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0% ...]
INFO 03-02 01:46:36 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.7%, Prefix cache hit rate: 0.0%
INFO 03-02 01:46:37 [logger.py:42] Received request cmpl-d7ecf0c412cb4c05b752891ca6efe28d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:46:37 [async_llm.py:261] Added request cmpl-d7ecf0c412cb4c05b752891ca6efe28d-0.
INFO 03-02 01:46:38 [logger.py:42] Received request cmpl-5dc7b965afe142bd884ca41919b19152-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 5986, 496, 3823, 4260, 8417, 236761], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:46:38 [async_llm.py:261] Added request cmpl-5dc7b965afe142bd884ca41919b19152-0.
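The triplets in this log are ordinary OpenAI-compatible `POST /v1/completions` calls against the funcpod. A minimal client sketch of what each request looks like is below; the `base_url` is a placeholder (substitute the pod's actual endpoint), while the model name and sampling parameters are taken from the pod metadata and the logged SamplingParams (temperature=0.0, top_p=1.0, n=1, max_tokens=5):

```python
import json
import urllib.request

def build_payload(prompt: str, max_tokens: int = 5) -> dict:
    """Build the completion payload matching the logged requests
    (greedy decoding: temperature=0.0, top_p=1.0, n=1)."""
    return {
        "model": "translategemma-27b-it-FP8-Dynamic",
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": 0.0,
        "top_p": 1.0,
        "n": 1,
    }

def post_completion(base_url: str, payload: dict) -> dict:
    """POST the payload to the /v1/completions endpoint.
    base_url is hypothetical here; point it at the real funcpod."""
    req = urllib.request.Request(
        f"{base_url}/v1/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

payload = build_payload("write a quick sort algorithm.")
print(payload["model"], payload["max_tokens"])
```

Sending this payload once per second reproduces the one-triplet-per-second cadence seen above; with max_tokens=5 each request finishes quickly, which is consistent with the engine reporting 0 running and 0 waiting requests between samples.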