# CLI Arguments

The LiteLLM proxy accepts the following CLI arguments, e.g. `--host`, `--port`, `--num_workers`.
## --host
- **Default:** `'0.0.0.0'`
- The host for the server to listen on.
- **Usage:**
  ```shell
  litellm --host 127.0.0.1
  ```
- **Usage - set Environment Variable:** `HOST`
  ```shell
  export HOST=127.0.0.1
  litellm
  ```
## --port
- **Default:** `4000`
- The port to bind the server to.
- **Usage:**
  ```shell
  litellm --port 8080
  ```
- **Usage - set Environment Variable:** `PORT`
  ```shell
  export PORT=8080
  litellm
  ```
## --num_workers
- **Default:** `1`
- The number of uvicorn workers to spin up.
- **Usage:**
  ```shell
  litellm --num_workers 4
  ```
- **Usage - set Environment Variable:** `NUM_WORKERS`
  ```shell
  export NUM_WORKERS=4
  litellm
  ```
## --api_base
- **Default:** `None`
- The API base for the model litellm should call.
- **Usage:**
  ```shell
  litellm --model huggingface/tinyllama --api_base https://k58ory32yinf1ly0.us-east-1.aws.endpoints.huggingface.cloud
  ```
## --api_version
- **Default:** `None`
- For Azure services, specify the API version.
- **Usage:**
  ```shell
  litellm --model azure/gpt-deployment --api_version 2023-08-01 --api_base https://<your api base>
  ```
## --model or -m
- **Default:** `None`
- The model name to pass to LiteLLM.
- **Usage:**
  ```shell
  litellm --model gpt-3.5-turbo
  ```
## --test
- **Type:** `bool` (Flag)
- Makes a test request to the proxy's chat completions endpoint.
- **Usage:**
  ```shell
  litellm --test
  ```
## --health
- **Type:** `bool` (Flag)
- Runs a health check on all models in config.yaml.
- **Usage:**
  ```shell
  litellm --health
  ```
## --alias
- **Default:** `None`
- An alias for the model, for user-friendly reference.
- **Usage:**
  ```shell
  litellm --alias my-gpt-model
  ```
## --debug
- **Default:** `False`
- **Type:** `bool` (Flag)
- Enable debugging mode for the input.
- **Usage:**
  ```shell
  litellm --debug
  ```
- **Usage - set Environment Variable:** `DEBUG`
  ```shell
  export DEBUG=True
  litellm
  ```
## --detailed_debug
- **Default:** `False`
- **Type:** `bool` (Flag)
- Enable detailed debugging mode for the input.
- **Usage:**
  ```shell
  litellm --detailed_debug
  ```
- **Usage - set Environment Variable:** `DETAILED_DEBUG`
  ```shell
  export DETAILED_DEBUG=True
  litellm
  ```
## --temperature
- **Default:** `None`
- **Type:** `float`
- Set the temperature for the model.
- **Usage:**
  ```shell
  litellm --temperature 0.7
  ```
## --max_tokens
- **Default:** `None`
- **Type:** `int`
- Set the maximum number of tokens for the model output.
- **Usage:**
  ```shell
  litellm --max_tokens 50
  ```
## --request_timeout
- **Default:** `6000`
- **Type:** `int`
- Set the timeout in seconds for completion calls.
- **Usage:**
  ```shell
  litellm --request_timeout 300
  ```
## --drop_params
- **Type:** `bool` (Flag)
- Drop any unmapped params.
- **Usage:**
  ```shell
  litellm --drop_params
  ```
## --add_function_to_prompt
- **Type:** `bool` (Flag)
- If a function is passed but unsupported, pass it as part of the prompt.
- **Usage:**
  ```shell
  litellm --add_function_to_prompt
  ```
## --config
- Configure LiteLLM by providing a configuration file path.
- **Usage:**
  ```shell
  litellm --config path/to/config.yaml
  ```
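To give a sense of what such a file contains, here is a minimal sketch of a `config.yaml`. The model name, provider route, and environment-variable reference below are illustrative assumptions, not values from this page; see the proxy configuration docs for the full schema.

```yaml
# Illustrative sketch only - names and keys here are placeholder assumptions.
model_list:
  - model_name: gpt-3.5-turbo          # alias clients will request
    litellm_params:
      model: openai/gpt-3.5-turbo      # underlying provider/model route
      api_key: os.environ/OPENAI_API_KEY  # read the key from the environment
```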
## --telemetry
- **Default:** `True`
- **Type:** `bool`
- Help track usage of this feature.
- **Usage:**
  ```shell
  litellm --telemetry False
  ```
## --log_config
- **Default:** `None`
- **Type:** `str`
- Specify a log configuration file for uvicorn.
- **Usage:**
  ```shell
  litellm --log_config path/to/log_config.conf
  ```
## --skip_server_startup
- **Default:** `False`
- **Type:** `bool` (Flag)
- Skip starting the server after setup (useful for DB migrations only).
- **Usage:**
  ```shell
  litellm --skip_server_startup
  ```
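The server options above each fall back to a default when neither the flag nor the environment variable is set. This small shell sketch (our own illustration, not part of the litellm CLI) shows how those environment-variable fallbacks compose into a launch command using the documented defaults:

```shell
#!/bin/sh
# Sketch: resolve server options from the env vars documented above,
# falling back to the documented defaults when they are unset.
HOST="${HOST:-0.0.0.0}"          # --host default
PORT="${PORT:-4000}"             # --port default
NUM_WORKERS="${NUM_WORKERS:-1}"  # --num_workers default

# Compose the equivalent explicit launch command and show it.
CMD="litellm --host $HOST --port $PORT --num_workers $NUM_WORKERS"
echo "$CMD"
```

With no environment variables set, this prints the launch command with the defaults, e.g. `litellm --host 0.0.0.0 --port 4000 --num_workers 1`.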