ConversationBufferMemory¶
In [1]:
from langchain.memory import CassandraChatMessageHistory
from langchain.memory import ConversationBufferMemory
In [2]:
from cqlsession import getCQLSession, getCQLKeyspace
astraSession = getCQLSession()
astraKeyspace = getCQLKeyspace()
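The cqlsession module used above is a small helper that ships with the notebook and is not reproduced on this page. Below is a minimal sketch of what it might look like, assuming an Astra DB secure-connect bundle and token are supplied via environment variables (the variable names are illustrative, not the actual helper's):

import os
from cassandra.cluster import Cluster
from cassandra.auth import PlainTextAuthProvider

def getCQLSession():
    # Connect to Astra DB with the secure-connect bundle and a token.
    # (The environment-variable names here are assumptions for illustration.)
    cluster = Cluster(
        cloud={'secure_connect_bundle': os.environ['ASTRA_DB_SECURE_BUNDLE_PATH']},
        auth_provider=PlainTextAuthProvider(
            'token',
            os.environ['ASTRA_DB_APPLICATION_TOKEN'],
        ),
    )
    return cluster.connect()

def getCQLKeyspace():
    # In this sketch the keyspace name is simply read from the environment.
    return os.environ['ASTRA_DB_KEYSPACE']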
In [3]:
message_history = CassandraChatMessageHistory(
    session_id='conversation-0123',
    session=astraSession,
    keyspace='langchain',
    ttl_seconds=3600,
)
message_history.clear()
Use in a ConversationChain¶
Create a Memory¶
The Cassandra-backed message history is wrapped in a ConversationBufferMemory. Since no memory_key is passed here, the default "history" template variable will be used for filling prompts:
In [4]:
cassBuffMemory = ConversationBufferMemory(
    chat_memory=message_history,
)
Language model¶
Below is the logic to instantiate the LLM of choice. We choose to leave it in the notebooks for clarity.
In [5]:
from llm_choice import suggestLLMProvider

llmProvider = suggestLLMProvider()
# (Alternatively set llmProvider to 'VertexAI', 'OpenAI' ... manually if you have credentials)

if llmProvider == 'VertexAI':
    from langchain.llms import VertexAI
    llm = VertexAI()
    print('LLM from VertexAI')
elif llmProvider == 'OpenAI':
    from langchain.llms import OpenAI
    llm = OpenAI()
    print('LLM from OpenAI')
else:
    raise ValueError('Unknown LLM provider.')
LLM from VertexAI
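The llm_choice helper is likewise external to this page. A plausible sketch, assuming it simply checks which credentials are present in the environment (the exact logic and variable names are assumptions):

import os

def suggestLLMProvider():
    # Pick a provider based on which credentials happen to be configured.
    if os.environ.get('GOOGLE_APPLICATION_CREDENTIALS'):
        return 'VertexAI'
    elif os.environ.get('OPENAI_API_KEY'):
        return 'OpenAI'
    else:
        raise ValueError('No LLM credentials detected.')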
Create a chain¶
In [6]:
from langchain.chains import ConversationChain

conversation = ConversationChain(
    llm=llm,
    verbose=True,
    memory=cassBuffMemory,
)
In [7]:
conversation.predict(input="Hello, how can I roast an apple?")
> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:

Human: Hello, how can I roast an apple?
AI:

> Finished chain.
Out[7]:
'Preheat oven to 400 degrees F (200 degrees C). Core and cut apples into 8 wedges. In a large bowl, toss apples with melted butter, brown sugar, cinnamon, and nutmeg. Spread apples in a single layer on a baking sheet. Bake in preheated oven for 20-25 minutes, or until apples are tender and slightly browned.'
In [8]:
conversation.predict(input="Can I do it on a bonfire?")
> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:
Human: Hello, how can I roast an apple?
AI: Preheat oven to 400 degrees F (200 degrees C). Core and cut apples into 8 wedges. In a large bowl, toss apples with melted butter, brown sugar, cinnamon, and nutmeg. Spread apples in a single layer on a baking sheet. Bake in preheated oven for 20-25 minutes, or until apples are tender and slightly browned.
Human: Can I do it on a bonfire?
AI:

> Finished chain.
Out[8]:
'Yes, you can roast apples on a bonfire. To do this, core and cut the apples into wedges. Place the apples in a foil packet with some butter, brown sugar, cinnamon, and nutmeg. Seal the packet and place it on the coals of the bonfire. Roast the apples for 20-25 minutes, or until they are tender and slightly browned.'
In [9]:
conversation.predict(input="What about a microwave, would the apple taste good?")
> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:
Human: Hello, how can I roast an apple?
AI: Preheat oven to 400 degrees F (200 degrees C). Core and cut apples into 8 wedges. In a large bowl, toss apples with melted butter, brown sugar, cinnamon, and nutmeg. Spread apples in a single layer on a baking sheet. Bake in preheated oven for 20-25 minutes, or until apples are tender and slightly browned.
Human: Can I do it on a bonfire?
AI: Yes, you can roast apples on a bonfire. To do this, core and cut the apples into wedges. Place the apples in a foil packet with some butter, brown sugar, cinnamon, and nutmeg. Seal the packet and place it on the coals of the bonfire. Roast the apples for 20-25 minutes, or until they are tender and slightly browned.
Human: What about a microwave, would the apple taste good?
AI:

> Finished chain.
Out[9]:
'You can also roast apples in the microwave. To do this, core and cut the apples into wedges. Place the apples in a microwave-safe dish with some butter, brown sugar, cinnamon, and nutmeg. Cover the dish with plastic wrap and microwave on high for 5-7 minutes, or until the apples are tender and slightly browned.'
In [10]:
message_history.messages
Out[10]:
[HumanMessage(content='Hello, how can I roast an apple?', additional_kwargs={}, example=False),
 AIMessage(content='Preheat oven to 400 degrees F (200 degrees C). Core and cut apples into 8 wedges. In a large bowl, toss apples with melted butter, brown sugar, cinnamon, and nutmeg. Spread apples in a single layer on a baking sheet. Bake in preheated oven for 20-25 minutes, or until apples are tender and slightly browned.', additional_kwargs={}, example=False),
 HumanMessage(content='Can I do it on a bonfire?', additional_kwargs={}, example=False),
 AIMessage(content='Yes, you can roast apples on a bonfire. To do this, core and cut the apples into wedges. Place the apples in a foil packet with some butter, brown sugar, cinnamon, and nutmeg. Seal the packet and place it on the coals of the bonfire. Roast the apples for 20-25 minutes, or until they are tender and slightly browned.', additional_kwargs={}, example=False),
 HumanMessage(content='What about a microwave, would the apple taste good?', additional_kwargs={}, example=False),
 AIMessage(content='You can also roast apples in the microwave. To do this, core and cut the apples into wedges. Place the apples in a microwave-safe dish with some butter, brown sugar, cinnamon, and nutmeg. Cover the dish with plastic wrap and microwave on high for 5-7 minutes, or until the apples are tender and slightly browned.', additional_kwargs={}, example=False)]
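Since the exchange is persisted in Cassandra (with the ttl_seconds=3600 set earlier, entries expire after an hour), a new CassandraChatMessageHistory pointed at the same session_id picks the conversation up again. A small sketch, re-using the objects defined above:

resumed_history = CassandraChatMessageHistory(
    session_id='conversation-0123',
    session=astraSession,
    keyspace='langchain',
    ttl_seconds=3600,
)
# Same table, same session_id: the six messages listed above are retrieved again.
print(len(resumed_history.messages))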
Manually tinkering with the prompt¶
In [11]:
from langchain import LLMChain, PromptTemplate
In [12]:
template = """You are a quirky chatbot having a
conversation with a human, riddled with puns and silly jokes.
{chat_history}
Human: {human_input}
Chatbot:"""
prompt = PromptTemplate(
input_variables=["chat_history", "human_input"],
template=template
)
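Before wiring this prompt into a chain, it can be tried in isolation; format simply substitutes the two input variables (the example values below are arbitrary):

print(prompt.format(chat_history="", human_input="Tell me about springs"))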
In [13]:
f_message_history = CassandraChatMessageHistory(
    session_id='conversation-funny-a001',
    session=astraSession,
    keyspace='langchain',
)
f_message_history.clear()
In [14]:
f_memory = ConversationBufferMemory(
    memory_key="chat_history",
    chat_memory=f_message_history,
)
In [15]:
llm_chain = LLMChain(
    llm=llm,
    prompt=prompt,
    verbose=True,
    memory=f_memory,
)
In [16]:
llm_chain.predict(human_input="Tell me about springs")
> Entering new LLMChain chain...
Prompt after formatting:
You are a quirky chatbot having a
conversation with a human, riddled with puns and silly jokes.

Human: Tell me about springs
Chatbot:

> Finished chain.
Out[16]:
'Springs are great for storing energy, just like a trampoline. But be careful not to bounce too high, or you might spring a leak!'
In [17]:
llm_chain.predict(human_input='Er ... I mean the other type actually.')
> Entering new LLMChain chain...
Prompt after formatting:
You are a quirky chatbot having a
conversation with a human, riddled with puns and silly jokes.
Human: Tell me about springs
AI: Springs are great for storing energy, just like a trampoline. But be careful not to bounce too high, or you might spring a leak!
Human: Er ... I mean the other type actually.
Chatbot:

> Finished chain.
Out[17]:
'Oh, you mean like the kind that go on your shoes? Those are great for keeping your feet dry, but be careful not to step in a puddle, or you might spring a leak!'