Config

The config module provides an interface for reading and writing configuration values. It includes functions to load, retrieve, and update configuration values for the Prometheux platform.


Configuration Functions

Get Configuration Value

Retrieves a value from the configuration using the specified key.

import prometheux_chain as px

token = px.config.get("PMTX_TOKEN")
print(token)

Function Signature

def get(key, default=None)

Parameters

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| key | str | Yes | The key for the configuration value to retrieve |
| default | any | No | A default value to return if the key is not found |

Returns

The value associated with the specified key, or the default (None unless otherwise specified) if the key does not exist.
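The fallback behavior can be illustrated with a minimal stand-in for the config store (a plain dict here, not the SDK's actual implementation):

```python
# Illustrative stand-in for the SDK's config store (assumption: get behaves
# like a dictionary lookup with a fallback value)
_config = {"PMTX_TOKEN": "pmtx_token"}

def get(key, default=None):
    # Return the stored value, or `default` when the key is absent
    return _config.get(key, default)

print(get("PMTX_TOKEN"))              # existing key -> stored value
print(get("MISSING_KEY", "fallback")) # missing key  -> the supplied default
print(get("MISSING_KEY"))             # missing key, no default -> None
```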


Set Configuration Value

Sets a configuration value for the specified key.

import prometheux_chain as px

px.config.set("PMTX_TOKEN", "new_token")

Function Signature

def set(key, value)

Parameters

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| key | str | Yes | The key for the configuration value to set |
| value | any | Yes | The value to associate with the key |

Update Multiple Configuration Values

Updates multiple configuration values at once from a dictionary.

import prometheux_chain as px

px.config.update_config({
    "PMTX_TOKEN": "updated_token",
    "LLM_API_KEY": "new_api_key"
})

Function Signature

def update_config(updates)

Parameters

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| updates | dict | Yes | A dictionary containing keys and values to update |

Authentication Token

The PMTX_TOKEN can be configured in two ways:

Using Environment Variables

import os

os.environ['PMTX_TOKEN'] = 'pmtx_token'

Using the SDK Configuration

import prometheux_chain as px

px.config.set("PMTX_TOKEN", "pmtx_token")

Configuring LLMs

Prometheux Chain uses LLMs for tasks such as generating human-readable explanations, translating Vadalog rules, graph RAG, and validating LLM output. By default, no LLM API key is configured.

Setting up the LLM

To enable and configure an LLM, such as OpenAI's GPT, follow these steps:

1. Set the LLM Provider:

import prometheux_chain as px

px.config.set("LLM_PROVIDER", "OpenAI")

2. Provide the API Key:

px.config.set("LLM_API_KEY", "your_api_key")

Default Configurations

The SDK uses the following default configurations:

| Key | Default Value | Description |
| --- | --- | --- |
| PMTX_TOKEN | None | Prometheux authentication token |
| LLM_API_KEY | None | LLM API key |
| LLM_PROVIDER | OpenAI | LLM provider name |
| LLM_VERSION | gpt-4o | LLM model version |
| LLM_TEMPERATURE | 0.50 | LLM temperature setting |
| LLM_MAX_TOKENS | 2000 | Maximum tokens for LLM responses |
| EMBEDDING_MODEL_VERSION | text-embedding-3-large | Embedding model version |
| EMBEDDING_DIMENSIONS | 2048 | Embedding dimensions |

User Management

Get User Role

Retrieves the current user's role.

import prometheux_chain as px

role = px.get_user_role()
print(f"User role: {role}")

Get Usage Status

Retrieves current API usage statistics and limits.

import prometheux_chain as px

usage = px.get_usage_status()
print(f"LLM usage: {usage['llm_usage']}")
print(f"Embedding usage: {usage['embedding_usage']}")

Complete Configuration Example

import prometheux_chain as px
import os

# Set up authentication via environment variable
os.environ['PMTX_TOKEN'] = 'my_pmtx_token'

# Or set via SDK
px.config.set("PMTX_TOKEN", "my_pmtx_token")

# Configure the backend URL
px.config.set('JARVISPY_URL', "https://api.prometheux.ai/jarvispy/my-org/my-user")

# Configure LLM settings
px.config.update_config({
    "LLM_PROVIDER": "OpenAI",
    "LLM_API_KEY": "your_openai_api_key",
    "LLM_VERSION": "gpt-4o",
    "LLM_TEMPERATURE": 0.7,
    "LLM_MAX_TOKENS": 4000
})

# Verify configuration
print(f"API URL: {px.config.get('JARVISPY_URL')}")
print(f"LLM Provider: {px.config.get('LLM_PROVIDER')}")