wow-agent-day05 Database Dialogue Agent

Reference: Datawhale wow agent day05

Local Installation of Ollama#

Reference: Datawhale wow rag class02
We adopt the fourth method: install Ollama locally, set up the model locally, and then use Ollama's service from LlamaIndex.

  1. Visit https://ollama.com and download the corresponding version.
  2. I want to move the model to drive D: move C:\Users\XX\AppData\Local\Programs\Ollama and C:\Users\XX\.ollama to drive D, placing them in the same folder. Then update the environment variables: point the original C:\Users\XX\AppData\Local\Programs\Ollama path entry to the new location, and create a new system variable named OLLAMA_MODELS whose value is the new location of the model files, for example D:\Ollama\models. I also want any computer on the local network to be able to access the service; this is handled with one more environment variable: name OLLAMA_HOST, value 0.0.0.0:11434.
  3. Verify the installation. Open a command-line window and enter ollama. If output such as Usage: and Available Commands: appears, the installation succeeded.
  4. Install qwen2:7b by running ollama run qwen2:7b; when the pull completes with success, the model is ready. Open 127.0.0.1:11434 in a browser, and if Ollama is running is displayed, the port is working normally (a quick check from Python is sketched below).
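
As a quick sanity check of step 4 from Python (my own sketch, not part of the tutorial; it assumes the default address 127.0.0.1:11434):

import urllib.request

# Hit the Ollama root endpoint; a healthy server replies with "Ollama is running".
with urllib.request.urlopen("http://127.0.0.1:11434") as resp:
    print(resp.read().decode())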

Configure Ollama in Jupyter Notebook#

%pip install llama-index-core
%pip install llama-index-embeddings-ollama
%pip install llama-index-llms-ollama
%pip install llama-index-readers-file
%pip install llama-index-vector-stores-faiss
%pip install llamaindex-py-client

Here, compared to the tutorial, the base_url has been modified.

# Configure dialogue model
from llama_index.llms.ollama import Ollama
llm = Ollama(base_url="http://127.0.0.1:11434", model="qwen2:7b")

# Configure Embedding model
from llama_index.embeddings.ollama import OllamaEmbedding
embedding = OllamaEmbedding(base_url="http://127.0.0.1:11434", model_name="qwen2:7b")

# Test dialogue model
response = llm.complete("Who are you?")
print(response)

# Test embedding model
emb = embedding.get_text_embedding("Hello there")
len(emb), type(emb)

Output:
I am a large-scale language model developed by Alibaba Cloud, named Tongyi Qianwen. As an AI assistant, my goal is to help users obtain accurate and useful information, answer various questions, and provide ideas or steps for problem-solving. I can answer questions in various fields, including but not limited to science, technology, education, culture, etc., and I can communicate in multiple languages. Please feel free to let me know what you need to understand, and I will do my best to assist you!
(3584, list)

Create Database#

import sqlite3
# Create database
sqllite_path = 'llmdb.db'
con = sqlite3.connect(sqllite_path)

# Create table
sql = """
CREATE TABLE `section_stats` (
  `Department` varchar(100) DEFAULT NULL,
  `Number of People` int(11) DEFAULT NULL
);
"""
c = con.cursor()
cursor = c.execute(sql)
c.close()
con.close()

This code initializes an SQLite database and creates a table in it to store each department and its number of people.
Next, fill in some data:

con = sqlite3.connect(sqllite_path)
c = con.cursor()
data = [
    ["Patent Department", 22],
    ["Trademark Department", 25],
]
# The column name contains spaces, so it must be quoted (backticks, as in the
# CREATE TABLE statement); parameter binding avoids manual string formatting.
sql = "INSERT INTO section_stats (`Department`, `Number of People`) VALUES (?, ?)"
for item in data:
    c.execute(sql, item)
con.commit()
c.close()
con.close()
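
To confirm the two rows were actually written, a short read-back (my own addition, not in the tutorial) can be run right after the inserts:

con = sqlite3.connect(sqllite_path)
c = con.cursor()
# The column name is backtick-quoted here as well because it contains spaces.
for row in c.execute("SELECT `Department`, `Number of People` FROM section_stats"):
    print(row)  # expected: ('Patent Department', 22) and ('Trademark Department', 25)
c.close()
con.close()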

Configure Dialogue Model and Embedding Model#

from llama_index.core.agent import ReActAgent  
from llama_index.core.tools import FunctionTool  
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex, Settings  
from llama_index.core.tools import QueryEngineTool   
from llama_index.core import SQLDatabase  
from llama_index.core.query_engine import NLSQLTableQueryEngine  
from sqlalchemy import create_engine, select  

# Configure default large model  
Settings.llm = llm
Settings.embed_model = embedding

## Create database query engine  
engine = create_engine("sqlite:///llmdb.db")  
# prepare data  
sql_database = SQLDatabase(engine, include_tables=["section_stats"])  
query_engine = NLSQLTableQueryEngine(  
    sql_database=sql_database,   
    tables=["section_stats"],   
    llm=Settings.llm  
)

Code explanation:

  1. create_engine is a function provided by SQLAlchemy to create a database engine.
  2. SQLDatabase is a class used to encapsulate the database connection. engine is the database engine we created in the previous step. include_tables=["section_stats"] specifies the table we are interested in, which is the section_stats table.
  3. NLSQLTableQueryEngine is a class used to convert natural language queries into SQL queries. sql_database=sql_database passes the SQLDatabase object we created earlier to the query engine. tables=["section_stats"] specifies the table to query. llm=Settings.llm specifies the language model (LLM) used to convert natural language into SQL queries.
The purpose of this code is to set up a system that can interact with the SQLite database through natural language queries. In this way, users can query data in the database using natural language without directly writing SQL statements.
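
Before wrapping the engine into an agent tool, it can be queried directly. This is a small sketch of my own; the exact wording of the answer depends on the model:

# Ask in natural language; the engine generates the SQL, runs it, and summarizes the result.
response = query_engine.query("How many people are in the Patent Department?")
print(response)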

Create Tools#

# Create tool functions  
def multiply(a: float, b: float) -> float:  
    """Multiply two numbers and return the product."""  
    return a * b  

multiply_tool = FunctionTool.from_defaults(fn=multiply)  

def add(a: float, b: float) -> float:  
    """Add two numbers and return their sum."""  
    return a + b

add_tool = FunctionTool.from_defaults(fn=add)

# Wrap the database query engine into a tool function object  
staff_tool = QueryEngineTool.from_defaults(
    query_engine,
    name="section_staff",
    description="Query the number of people in the department."  
)
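
The tools can also be smoke-tested on their own before handing them to the agent. This sketch assumes the call() interface exposed by FunctionTool and QueryEngineTool:

# Invoke the tools directly, outside of any agent.
print(add_tool.call(a=22, b=25))             # returns a ToolOutput; its content is the sum (47.0)
print(staff_tool.call("Patent Department"))  # runs the NL-to-SQL query engine under the hood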

Test#

# Build the ReActAgent. Many tools could be registered; here only the addition tool and the department staff query tool are added.
agent = ReActAgent.from_tools([add_tool, staff_tool], verbose=True)  
# Give instructions through the agent
response = agent.chat("Please get the number of people in the `Patent Department` and `Trademark Department` from the database table, and add the number of people in these two departments!")  

Output:

Running step 98f2e8d3-b1e4-4d1d-899d-494cbfb7d92d. Step input: Please get the number of people in the Patent Department and Trademark Department from the database table, and add the number of people in these two departments!
Thought: I need to use the section_staff tool to query the number of people in the department and calculate the total number of people in the two departments.
Action: section_staff
Action Input: {'input': 'Patent Department'}
Observation: In the provided data, the total number of people in the "Patent Department" is 22.
Running step 075f2870-1484-4882-81df-220dfacebc59. Step input: None
Thought: I still need to get the number of people in the "Trademark Department".
Action: section_staff
Action Input: {'input': 'Trademark Department'}
Observation: The SQL query result indicates that there are 25 entries related to the 'Trademark Department'. This might mean there are 25 specific tasks, documents, or activities associated with this department.
Running step 89a9b7df-6b2b-4baa-8054-047aefc7f48f. Step input: None
Thought: I need to add the number of people in the two departments.
Action: add
Action Input: {'a': 22, 'b': 25}
Observation: 47
Running step 467a2f45-3cc8-455f-8136-c5f5cb85a34d. Step input: None
Thought: I can answer without using any more tools.
Answer: The total number of people in the Patent Department and Trademark Department is 47.
The total number of people in the Patent Department and Trademark Department is 47.
