
llm_chat

thml.llm_chat ¤

This module contains the chatbot classes and methods to work with text generation models.

Modules:

post_api ¤

This module contains the chatbot classes and methods to work with text generation models.

Modules:

chat_google ¤

Build a chatbot using Google's Gemini API

REF: - Python API: https://ai.google.dev/tutorials/python_quickstart - Prompt examples: https://ai.google.dev/docs/prompt_best_practices - Palm vs Gemini: https://ai.google.dev/docs/migration_guide

Classes:

Google(api_key=None, **kwargs) ¤

Class for chatbot using Google's Gemini API via google.generativeai package

Parameters:

  • api_key (str, default: None ) –

    The Google API key.

Other Parameters:

  • model (str = 'gemini-pro') –

    The model to use for the chat client.

  • temperature (float = 0.7) –

    The temperature to use for the chat client.

  • top_p (float = 1) –

    An alternative to sampling with temperature.

  • max_tokens (int = 8096) –

    The maximum number of tokens to generate in the response.

Methods:

  • ask

    Ask Google Gemini a question and return the answer.

Attributes:

avail_models = avail_models instance-attribute ¤
params = kwargs instance-attribute ¤
ask(prompt='hello my friend') ¤

Ask Google Gemini a question and return the answer.

Parameters:

  • prompt (str, default: 'hello my friend' ) –

    The question or prompt to ask the chatbot.

Returns:

  • text ( str ) –

    The answer to the question.
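Going by the signatures documented above, a minimal usage sketch might look like the following. The import path and the GOOGLE_API_KEY environment variable are assumptions (this page does not show them); calling ask() requires the thml package, a valid Gemini API key, and network access, so the sketch only defines the function rather than executing it.

```python
import os

def demo_google_chat():
    # Import path assumed from this page's module layout.
    from thml.llm_chat.chat_google import Google

    # Keyword arguments mirror the documented defaults
    # (model='gemini-pro', temperature=0.7, top_p=1).
    bot = Google(api_key=os.environ["GOOGLE_API_KEY"],
                 model="gemini-pro", temperature=0.7, top_p=1)
    return bot.ask("What is the capital of France?")
```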

chat_gpt4free ¤

Classes:

  • FreeChat

    Class for chatbot using reverse-engineered models.

  • FreeImage

    Class for image generation using reverse-engineered models.

FreeChat(**kwargs: Any) ¤

Bases: _Base

Class for chatbot using reverse-engineered models.

Other Parameters:

  • provider (str = None) –

    The provider of the model. If None, the best provider will be used.

  • api_key (str = None) –

    The API key for the provider.

  • model (str = 'gpt-4') –

    The model to use.

  • temperature (float = 0.7) –

    The temperature of the model.

  • top_p (float = 1) –

    The top_p of the model.

  • max_tokens (int = 8096) –

    The max tokens of the model.

  • system_prompt (str = '') –

    The system prompt of the model.

Methods:

  • ask

    Ask the chatbot a question.

Attributes:

params = _params(**kwargs) instance-attribute ¤
avail_models = avail_models instance-attribute ¤
avail_providers = self._avail_providers() instance-attribute ¤
ask(prompt: str) -> str ¤

Ask the chatbot a question.

Parameters:

  • prompt (str) –

    The input string for the chatbot.

Returns:

  • text ( str ) –

    The answer to the question.

FreeImage() ¤

Class for image generation using reverse-engineered models.

chat_openai ¤

Build a chatbot using the ChatGPT API

Implementation following this repo: https://github.com/stancsz/chatgpt/blob/master/ChatGPT.py

Terms: - OpenAI's text generation models (often called generative pre-trained transformers or large language models) - The inputs to these models are also referred to as "prompts".

REF: - Refer to file: src_thatool\devtools\dev_chatGPT\chat_API/chat_Copilot2GPT.ipynb - openai docs: https://platform.openai.com/docs/guides/text-generation/text-generation-models - openai repo: https://github.com/openai/openai-python - prompt examples: https://github.com/f/awesome-chatgpt-prompts - openai examples: https://platform.openai.com/examples - inheritance in python: https://stackoverflow.com/questions/9575409/calling-parent-class-init-with-multiple-inheritance-whats-the-right-way - super() in python: https://stackoverflow.com/questions/34550425/how-to-initialize-subclass-paramseters-in-python-using-super - create chatbot using openai API: https://medium.com/data-professor/beginners-guide-to-openai-api-a0420bc58ee5 - OpenAI API tips: https://arize.com/blog-course/mastering-openai-api-tips-and-tricks/

Classes:

  • BaseChat

    Base class for chatbot, to define common attributes and methods for chatbot

  • Openai

    Class for chatbot using OpenAI API via openai package

  • Post

    Class for chatbot using OpenAI API via POST request

BaseChat ¤

Base class for chatbot, to define common attributes and methods for chatbot

Methods:

save_history(prompt, response) ¤
export_history(filename='chat_history.txt') ¤
load_history(filename='chat_history.txt') ¤
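BaseChat only names these three history methods; their internal storage is not shown on this page. The sketch below is a hypothetical mixin with the same method names and defaults, assuming an in-memory list of (prompt, response) pairs and a simple line-based text format.

```python
# Hypothetical sketch of a BaseChat-style history mixin; the internal
# storage (a list of (prompt, response) pairs) and the file format are
# assumptions. Single-line prompts/responses are assumed.
class HistoryMixin:
    def __init__(self):
        self.history = []

    def save_history(self, prompt, response):
        """Append one exchange to the in-memory history."""
        self.history.append((prompt, response))

    def export_history(self, filename="chat_history.txt"):
        """Write the history to a plain-text file, two lines per exchange."""
        with open(filename, "w", encoding="utf-8") as f:
            for prompt, response in self.history:
                f.write(f"USER: {prompt}\nAI: {response}\n")

    def load_history(self, filename="chat_history.txt"):
        """Read a previously exported file back into memory."""
        self.history = []
        with open(filename, encoding="utf-8") as f:
            lines = [ln.rstrip("\n") for ln in f]
        for user_line, ai_line in zip(lines[0::2], lines[1::2]):
            self.history.append((user_line[len("USER: "):],
                                 ai_line[len("AI: "):]))
        return self.history
```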
Openai(service: str = 'openai', **kwargs: Any) ¤

Bases: BaseChat

Class for chatbot using OpenAI API via openai package

Parameters:

  • service (str, default: 'openai' ) –

    The service to use for the chat client (preset of base_url). Available services are: openai, copilot, local_gpt4all

Other Parameters:

  • base_url (str) –

    The OpenAI API base URL. Preset based on the service.

  • api_key (str) –

    The OpenAI API key. Preset based on the service.

  • model (str = 'gpt-4') –

    The model to use for the chat client. All models can be found at the OpenAI site. Only two models, 'gpt-4' and 'gpt-3.5-turbo', are available for copilot.

  • temperature (float = 0.7) –

    The temperature to use for the chat client. The temperature is a value between 0 and 1. Lower temperatures will cause the model to repeat itself more often, while higher temperatures will increase the model's diversity of responses. Use either temperature or top_p, but not both.

  • top_p (float = 1) –

    An alternative to sampling with temperature. The top_p is a value between 0 and 1. Use either temperature or top_p, but not both.

  • max_tokens (int = 8096) –

    The maximum number of tokens to generate in the response.

  • stream (bool = False) –

    Whether to stream the response or not.

  • system_prompt (str = '') –

    The prompt to use for the system.
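The "use either temperature or top_p, but not both" guidance above can be made concrete with a small sketch. This is illustrative only, not the library's actual internals; it builds a chat-completions-style payload using the documented field names and defaults.

```python
# Illustrative payload builder (not the Openai class's real internals):
# enforces the documented "temperature or top_p, not both" rule.
def build_payload(model="gpt-4", prompt="hello", system_prompt="",
                  temperature=None, top_p=None, max_tokens=8096):
    if temperature is not None and top_p is not None:
        raise ValueError("Use either temperature or top_p, not both")
    messages = []
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": prompt})
    payload = {"model": model, "messages": messages, "max_tokens": max_tokens}
    if temperature is not None:
        payload["temperature"] = temperature
    elif top_p is not None:
        payload["top_p"] = top_p
    return payload
```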

Methods:

Attributes:

params = _params(service, **kwargs) instance-attribute ¤
avail_models = avail_models instance-attribute ¤
ask(prompt='hello', **kwargs: Any) -> str ¤

Ask GPT-4 a question and return the answer, using the new openai API.

Parameters:

  • prompt (str, default: 'hello' ) –

    The question to ask GPT-4.

Other Parameters:

  • save_history (bool = False) –

    Whether to save the question and answer to the chat history.

  • use_history (bool = False) –

    Whether to use the chat history in the current request.

Returns:

  • text ( str ) –

    The answer to the question.

save_history(prompt, response) ¤
export_history(filename='chat_history.txt') ¤
load_history(filename='chat_history.txt') ¤
Post(service: str = 'copilot', **kwargs: Any) ¤

Class for chatbot using OpenAI API via POST request

Parameters:

  • **kwargs (Any, default: {} ) –

    See Openai class for the arguments.


Methods:

Attributes:

params = _params(service, **kwargs) instance-attribute ¤
avail_models = avail_models instance-attribute ¤
ask(prompt='hello') ¤

web_playwright ¤

Modules:

chatgpt ¤

Classes:

WebChatgpt(cookie_file: str = None, proxy: str = None, chat_id: str = 'temporary') ¤

Bases: WebBase

Interact with the ChatGPT web UI

proxy (str): proxy server. e.g., "http://something.com:8080"
chat_id (str): "id", "last", "temporary", "new".
    If "id", use `chat_id`. If the given `chat_id` is not found on the web, fall back to `chat_id="temporary"`.
    If "last", use last chat.
    If "temporary", use a temporary chat (this feature must be enabled on the website the first time an account logs in).
    If "new", start new chat.
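The chat_id fallback rules above can be sketched as a small resolver. The real class resolves this against the live page; here a plain list of known conversation ids stands in, purely for illustration.

```python
# Hypothetical sketch of the documented chat_id fallback rules;
# `existing_ids` stands in for the conversations visible on the web page.
def resolve_chat(chat_id, existing_ids):
    """Return which conversation to open, per the documented rules."""
    if chat_id == "new":
        return "new"
    if chat_id == "last":
        return existing_ids[-1] if existing_ids else "new"
    if chat_id == "temporary":
        return "temporary"
    # Otherwise chat_id is a concrete id: fall back to temporary if absent.
    return chat_id if chat_id in existing_ids else "temporary"
```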

Methods:

Attributes:

base_url = 'https://chatgpt.com' instance-attribute ¤
cookie_file = cookie_file instance-attribute ¤
chat_id = chat_id instance-attribute ¤
login = False instance-attribute ¤
prompt_textarea = self.page.get_by_placeholder('Message ChatGPT') instance-attribute ¤
send_button = self.page.locator('button.mb-1.me-1.h-8.w-8') instance-attribute ¤
stop_button = self.page.locator('button.mb-1.me-1.h-8.w-8').locator('rect') instance-attribute ¤
browser_kwargs = {} instance-attribute ¤
context_kwargs = {} instance-attribute ¤
device: str = None instance-attribute ¤
page = None instance-attribute ¤
send_count = 0 instance-attribute ¤
send_prompt(prompt: str = 'Hello, are you gpt-4o?') async ¤

Submit prompt text

Parameters:

  • prompt (str, default: 'Hello, are you gpt-4o?' ) –

    prompt text

get_last_ai_message() async ¤

Get the last AI message

get_all_messages() -> list[dict] async ¤

Get all messages by user and AI

get_all_chat_id() async ¤

Get all conversation ids

get_last_chat_id() async ¤

Get the last conversation id

ask(prompt: str = 'Hello, are you gpt-4o?') ¤

Ask and get the response from the web UI

Parameters:

  • prompt (str, default: 'Hello, are you gpt-4o?' ) –

    prompt text

Returns:

  • str

    response from AI

get_chat_history() ¤

Alias of the get_all_messages() method, but in sync mode

close() ¤

Alias of _close_page() method, but in sync mode
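Putting the WebChatgpt methods above together, a usage sketch might look like this. The import path and cookie-file name are assumptions; running it requires Playwright, a valid exported cookie file, and a live session, so the sketch only defines the function.

```python
def demo_web_chatgpt():
    # Import path assumed from this page's module layout.
    from thml.llm_chat.web_playwright.chatgpt import WebChatgpt

    bot = WebChatgpt(cookie_file="cookies.json", chat_id="temporary")
    reply = bot.ask("Summarise this page in one sentence.")  # sync wrapper
    history = bot.get_chat_history()  # sync alias of get_all_messages()
    bot.close()                       # sync alias of _close_page()
    return reply, history
```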

claude ¤

Classes:

WebClaude(cookie_file: str = None, proxy: str = None, chat_id: str = 'last') ¤

Bases: WebBase

Interact with the Claude web UI

proxy (str): proxy server. e.g., "http://something.com:8080"
chat_id (str): "id", "last", "new".
    If "id", use `chat_id`. If the given `chat_id` is not found on the web, fall back to `chat_id="last"`.
    If "last", use the last chat.
    If "new", start new chat.

Methods:

Attributes:

base_url = 'https://claude.ai' instance-attribute ¤
cookie_file = cookie_file instance-attribute ¤
chat_id = chat_id instance-attribute ¤
prompt_textarea = self.page.get_by_label('Write your prompt to Claude').locator('p') instance-attribute ¤
send_button = self.page.get_by_role('button', name='Send Message') instance-attribute ¤
stop_button = self.page.get_by_role('button', name='Stop Response') instance-attribute ¤
browser_kwargs = {} instance-attribute ¤
context_kwargs = {} instance-attribute ¤
device: str = None instance-attribute ¤
page = None instance-attribute ¤
send_count = 0 instance-attribute ¤
send_prompt(prompt: str = 'Hello, are you gpt-4o?') async ¤

Submit prompt text

Parameters:

  • prompt (str, default: 'Hello, are you gpt-4o?' ) –

    prompt text

get_last_ai_message() async ¤

Get the last AI message

get_all_chat_id() async ¤

Get all conversation ids

get_last_chat_id() async ¤

Get the last conversation id

ask(prompt: str = 'Hello, are you gpt-4o?') ¤

Ask and get the response from the web UI

Parameters:

  • prompt (str, default: 'Hello, are you gpt-4o?' ) –

    prompt text

Returns:

  • str

    response from AI

get_chat_history() ¤

Alias of the get_all_messages() method, but in sync mode

close() ¤

Alias of _close_page() method, but in sync mode

copilot_playwright ¤

Classes:

WebCopilot(cookie_file: str = None, proxy: str = None, chat_id: str = None, conversation_style: str = None) ¤

Bases: WebBase

Interact with the Copilot web UI

proxy (str): proxy server. e.g., "http://something.com:8080"

Methods:

Attributes:

base_url = 'https://copilot.microsoft.com/' instance-attribute ¤
cookie_file = cookie_file instance-attribute ¤
prompt_textarea = self.page.get_by_role('textbox', name='Ask me anything...') instance-attribute ¤
send_button = self.page.get_by_role('button', name='Submit') instance-attribute ¤
stop_button = self.page.get_by_role('button', name='Stop Responding') instance-attribute ¤
upload_image_button = self.page.get_by_role('button', name='Add an image to search') instance-attribute ¤
upload_file_button = self.page.get_by_role('button', name='Add a file') instance-attribute ¤
browser_kwargs = {} instance-attribute ¤
context_kwargs = {} instance-attribute ¤
device: str = None instance-attribute ¤
page = None instance-attribute ¤
send_count = 0 instance-attribute ¤
send_prompt(prompt: str = 'Hello, are you gpt-4o?') async ¤

Submit prompt text

Parameters:

  • prompt (str, default: 'Hello, are you gpt-4o?' ) –

    prompt text

get_last_ai_message() async ¤

Get the last AI message

get_last_ai_reference() async ¤

Get the references in last AI message

get_all_messages() -> list[dict] async ¤

Get all messages by user and AI

ask(prompt: str = 'Hello, are you gpt-4o?') ¤

Ask and get the response from the web UI

Parameters:

  • prompt (str, default: 'Hello, are you gpt-4o?' ) –

    prompt text

Returns:

  • str

    response from AI

get_chat_history() ¤

Alias of the get_all_messages() method, but in sync mode

close() ¤

Alias of _close_page() method, but in sync mode

gemini ¤

Classes:

WebGemini(cookie_file: str = None, proxy: str = None, conversation_id: str = None) ¤

Bases: WebBase

Interact with the Gemini web UI

proxy (str): proxy server. e.g., "http://something.com:8080"

Methods:

Attributes:

base_url = 'https://gemini.google.com/app' instance-attribute ¤
cookie_file = cookie_file instance-attribute ¤
prompt_textarea = self.page.get_by_role('textbox') instance-attribute ¤
browser_kwargs = {} instance-attribute ¤
context_kwargs = {} instance-attribute ¤
device: str = None instance-attribute ¤
page = None instance-attribute ¤
send_count = 0 instance-attribute ¤
send_prompt(prompt: str = 'Hello, are you gpt-4o?') async ¤

Submit prompt text

Parameters:

  • prompt (str, default: 'Hello, are you gpt-4o?' ) –

    prompt text

get_last_message() async ¤

Get the last AI message

get_all_messages() -> list[dict] async ¤

Get all messages by user and AI

up_load_file() async ¤

Upload a file

ask(prompt: str = 'Hello, are you gpt-4o?') ¤

Ask and get the response from the web UI

Parameters:

  • prompt (str, default: 'Hello, are you gpt-4o?' ) –

    prompt text

Returns:

  • str

    response from AI

get_chat_history() ¤

Alias of the get_all_messages() method, but in sync mode

close() ¤

Alias of _close_page() method, but in sync mode

llama ¤

Classes:

  • WebLlama

    Interact with the Llama web UI

WebLlama(cookie_file: str = None, proxy: str = None, chat_id: str = 'last') ¤

Bases: WebBase

Interact with the Llama web UI

proxy (str): proxy server. e.g., "http://something.com:8080"
chat_id (str): "id", "last", "new".
    If "id", use `chat_id`. If the given `chat_id` is not found on the web, fall back to `chat_id="last"`.
    If "last", use the last chat.
    If "new", start new chat.

Methods:

Attributes:

base_url = 'https://chatwithllama.com' instance-attribute ¤
cookie_file = cookie_file instance-attribute ¤
chat_id = chat_id instance-attribute ¤
prompt_textarea = self.page.get_by_placeholder('Ask anything!') instance-attribute ¤
send_button = self.page.get_by_role('button', name='Send question') instance-attribute ¤
stop_button = self.page.get_by_role('button', name='Stop generation') instance-attribute ¤
browser_kwargs = {} instance-attribute ¤
context_kwargs = {} instance-attribute ¤
device: str = None instance-attribute ¤
page = None instance-attribute ¤
send_count = 0 instance-attribute ¤
send_prompt(prompt: str = 'Hello, are you gpt-4o?') async ¤

Submit prompt text

Parameters:

  • prompt (str, default: 'Hello, are you gpt-4o?' ) –

    prompt text

get_last_ai_message() async ¤

Get the last AI message

get_all_chat_id() async ¤

Get all conversation ids

get_last_chat_id() async ¤

Get the last conversation id

ask(prompt: str = 'Hello, are you gpt-4o?') ¤

Ask and get the response from the web UI

Parameters:

  • prompt (str, default: 'Hello, are you gpt-4o?' ) –

    prompt text

Returns:

  • str

    response from AI

get_chat_history() ¤

Alias of the get_all_messages() method, but in sync mode

close() ¤

Alias of _close_page() method, but in sync mode

mistral ¤

Classes:

WebMistral(cookie_file: str = None, proxy: str = None, chat_id: str = 'last') ¤

Bases: WebBase

Interact with the Mistral web UI

proxy (str): proxy server. e.g., "http://something.com:8080"
chat_id (str): "id", "last", "new".
    If "id", use `chat_id`. If the given `chat_id` is not found on the web, fall back to `chat_id="last"`.
    If "last", use the last chat.
    If "new", start new chat.

Methods:

Attributes:

base_url = 'https://chat.mistral.ai' instance-attribute ¤
cookie_file = cookie_file instance-attribute ¤
chat_id = chat_id instance-attribute ¤
prompt_textarea = self.page.get_by_placeholder('Ask anything!') instance-attribute ¤
send_button = self.page.get_by_role('button', name='Send question') instance-attribute ¤
stop_button = self.page.get_by_role('button', name='Stop generation') instance-attribute ¤
browser_kwargs = {} instance-attribute ¤
context_kwargs = {} instance-attribute ¤
device: str = None instance-attribute ¤
page = None instance-attribute ¤
send_count = 0 instance-attribute ¤
send_prompt(prompt: str = 'Hello, are you gpt-4o?') async ¤

Submit prompt text

Parameters:

  • prompt (str, default: 'Hello, are you gpt-4o?' ) –

    prompt text

get_last_ai_message() async ¤

Get the last AI message

get_all_chat_id() async ¤

Get all conversation ids

get_last_chat_id() async ¤

Get the last conversation id

ask(prompt: str = 'Hello, are you gpt-4o?') ¤

Ask and get the response from the web UI

Parameters:

  • prompt (str, default: 'Hello, are you gpt-4o?' ) –

    prompt text

Returns:

  • str

    response from AI

get_chat_history() ¤

Alias of the get_all_messages() method, but in sync mode

close() ¤

Alias of _close_page() method, but in sync mode

web_requests ¤

Modules:

bing_copilot ¤

Classes:

  • RWebCopilot

    Reverse-engineered Bing/Edge Copilot via Web browser.

Functions:

RWebCopilot(cookie_file: str = None, conversation_style: Literal['creative', 'balanced', 'precise'] = 'precise') ¤

Reverse-engineered Bing/Edge Copilot via Web browser.

conversation_style (str, optional): The conversation style. Available options: 'creative', 'balanced', 'precise'

Methods:

  • ask

    Ask the bot a question

Attributes:

cookie_file = cookie_file instance-attribute ¤
bot = asyncio.run(Chatbot.create(cookies=cookies)) instance-attribute ¤
conversation_style = style_map[conversation_style] instance-attribute ¤
ask(prompt: str, attachment: str = None, return_refs: bool = False) -> Union[str, dict[str, list]] ¤

Ask the bot a question

Parameters:

  • prompt (str) –

    The prompt to ask the bot

Returns:

  • text ( Union[str, dict[str, list]] ) –

    The answer as a string, or a dict of text and references when return_refs=True.

response_parser(response: dict) -> dict[str, list[str]] ¤

Parse the response from the re_edge_gpt chatbot

Parameters:

  • response (dict) –

    response from the re_edge_gpt chatbot

Returns:

  • dict[str, list[str]] –

    A dict with keys final_text and references.
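The re_edge_gpt response schema is not shown on this page, so the field names below ('text', 'sources', 'url') are assumptions made purely to illustrate the final_text/references split that response_parser performs; the function is given a hypothetical name to avoid implying it is the library's implementation.

```python
# Hypothetical parser illustrating the final_text/references split;
# the input layout ('text' plus a list of 'sources' with 'url' keys)
# is an assumption, not the real re_edge_gpt schema.
def parse_copilot_response(response):
    final_text = response.get("text", "")
    references = [src["url"] for src in response.get("sources", [])]
    return {"final_text": [final_text], "references": references}
```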