Coursera

# Copyright 2023 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

Chain of Thought & ReAct

Run in Colab
View on GitHub
Open in Vertex AI Workbench

Introduction

This notebook demonstrates advanced prompting techniques such as chain of thought reasoning, and shows how to build ReAct agents using LangChain and Vertex AI. You will start by exploring chain of thought and how it can improve the performance of language models. Then you will learn how to build ReAct agents using LangChain.

Author(s): Chris Hanna

Setup

!pip install --user langchain==0.0.310 \
                    google-cloud-aiplatform==1.35.0 \
                    prettyprinter==0.18.0 \
                    wikipedia==1.4.0 \
                    chromadb==0.3.26 \
                    tiktoken==0.5.1 \
                    tabulate==0.9.0 \
                    sqlalchemy-bigquery==1.8.0 \
                    google-cloud-bigquery==3.11.4
Collecting langchain==0.0.310
Collecting google-cloud-aiplatform==1.35.0
Collecting prettyprinter==0.18.0
Collecting wikipedia==1.4.0
Collecting chromadb==0.3.26
Collecting tiktoken==0.5.1
Requirement already satisfied: tabulate==0.9.0 in /opt/conda/lib/python3.10/site-packages (0.9.0)
Collecting sqlalchemy-bigquery==1.8.0
Collecting google-cloud-bigquery==3.11.4
...
Building wheels for collected packages: wikipedia, hnswlib
Successfully built wikipedia hnswlib
Installing collected packages: mpmath, monotonic, sympy, SQLAlchemy, pulsar-client, prettyprinter, mypy-extensions, humanfriendly, hnswlib, duckdb, clickhouse-connect, backoff, anyio, wikipedia, typing-inspect, tiktoken, posthog, marshmallow, langsmith, huggingface_hub, coloredlogs, tokenizers, onnxruntime, dataclasses-json, langchain, chromadb, google-cloud-bigquery, sqlalchemy-bigquery, google-cloud-aiplatform
  WARNING: The script isympy is installed in '/home/jupyter/.local/bin' which is not on PATH.
  Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
  ...
  Attempting uninstall: google-cloud-bigquery
    Found existing installation: google-cloud-bigquery 2.34.4
    Uninstalling google-cloud-bigquery-2.34.4:
      Successfully uninstalled google-cloud-bigquery-2.34.4
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
ipython-sql 0.5.0 requires sqlalchemy>=2.0, but you have sqlalchemy 1.4.52 which is incompatible.
Successfully installed SQLAlchemy-1.4.52 anyio-3.7.1 backoff-2.2.1 chromadb-0.3.26 clickhouse-connect-0.7.3 coloredlogs-15.0.1 dataclasses-json-0.6.4 duckdb-0.10.1 google-cloud-aiplatform-1.35.0 google-cloud-bigquery-3.11.4 hnswlib-0.8.0 huggingface_hub-0.21.4 humanfriendly-10.0 langchain-0.0.310 langsmith-0.0.92 marshmallow-3.21.1 monotonic-1.6 mpmath-1.3.0 mypy-extensions-1.0.0 onnxruntime-1.17.1 posthog-3.5.0 prettyprinter-0.18.0 pulsar-client-3.4.0 sqlalchemy-bigquery-1.8.0 sympy-1.12 tiktoken-0.5.1 tokenizers-0.15.2 typing-inspect-0.9.0 wikipedia-1.4.0

Restart current runtime

To use the newly installed packages in this Jupyter runtime, you must restart the runtime. You can do this by running the cell below, which will restart the current kernel.

For Vertex AI Workbench, you can also restart the terminal using the button at the top.

# Restart kernel after installs so that your environment can access the new packages
import IPython
import time

app = IPython.Application.instance()
app.kernel.do_shutdown(True)
{'status': 'ok', 'restart': True}
⚠️ The kernel is going to restart. Please wait until it is finished before continuing to the next step. ⚠️

Authenticate your notebook environment (Colab only)

If you are using Colab, run the cell below to authenticate:

import sys

if "google.colab" in sys.modules:
    from google.colab import auth

    auth.authenticate_user()
    print("Authenticated")

Import packages

import vertexai
import os
import IPython
from langchain.llms import VertexAI
from IPython.display import display, Markdown
2024-03-22 06:40:29.430121: I external/local_tsl/tsl/cuda/cudart_stub.cc:31] Could not find cuda drivers on your machine, GPU will not be used.
2024-03-22 06:40:29.482212: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2024-03-22 06:40:29.482254: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2024-03-22 06:40:29.483777: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-03-22 06:40:29.492034: I external/local_tsl/tsl/cuda/cudart_stub.cc:31] Could not find cuda drivers on your machine, GPU will not be used.
2024-03-22 06:40:29.493447: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-03-22 06:40:30.808124: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
PROJECT_ID = "qwiklabs-gcp-04-91900298e456"  # @param {type:"string"}
LOCATION = "us-central1"  # @param {type:"string"}
MODEL_NAME = "text-bison@001"  # @param {type:"string"}
vertexai.init(project=PROJECT_ID, location=LOCATION)
llm = VertexAI(model_name=MODEL_NAME, max_output_tokens=1000)
llm.predict(
    "Improve this description : In this notebook we'll explore advanced prompting techniques, and building ReAct agents using LangChain and Vertex AI "
)
'This notebook will explore advanced prompting techniques, and building ReAct agents using LangChain and Vertex AI.\n\nWe will start by introducing the concept of prompting, and how it can be used to improve the performance of language models. We will then discuss some of the advanced prompting techniques that are available, and how they can be used to achieve specific goals. Finally, we will show how to build ReAct agents using LangChain and Vertex AI.\n\nReAct agents are a type of language model that is designed to be used in interactive dialogue systems. They are able to generate text, translate languages, write different kinds of creative content, and answer your questions in an informative way.\n\nLangChain is a library that makes it easy to build and train ReAct agents. It provides a number of features that make it easy to develop and deploy these agents, including a variety of pre-trained language models, a simple API, and support for a variety of tasks.\n\nVertex AI is a managed machine learning platform that makes it easy to build, train, and deploy machine learning models. It provides a number of features that make it easy to use LangChain to build ReAct agents, including a variety of pre-trained language models, a simple API, and support for a variety of tasks.\n\nBy the end of this notebook, you will have a good understanding of advanced prompting techniques, and how to use them to build ReAct agents using LangChain and Vertex AI.'

Chain of Thought - Introduction


Chain of thought prompting, introduced in the paper "Chain-of-Thought Prompting Elicits Reasoning in Large Language Models" (Wei et al., 2022), is a novel approach to enhancing the reasoning capabilities of large language models (LLMs), especially in multi-step reasoning tasks.

In contrast to standard prompting, where models are asked to directly produce the final answer, chain of thought prompting encourages LLMs to generate intermediate reasoning steps before providing the final answer to a problem. The advantage of this technique lies in its ability to break down complex problems into manageable, intermediate steps. By doing this, the model-generated chain of thought can mimic an intuitive human thought process when working through multi-step problems.


Chain of Thought - Use Cases


Chain of Thought - Sample

question = """Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls.
Each can has 3 tennis balls. How many tennis balls does he have now?
A: The answer is 11.
Q: The cafeteria had 23 apples.
If they used 20 to make lunch and bought 6 more, how many apples do they have?
A:"""

llm.predict(question)
'The answer is 19.'

Rewriting the prompt to include a chain of thought shows the LLM how to decompose the question into multiple simple steps of reasoning.

The model response then follows a similar chain of thought, increasing the likelihood of a correct answer.

question = """Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls.
Each can has 3 tennis balls. How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 tennis balls
each is 6 tennis balls. 5 + 6 = 11. The answer is 11.
Q: The cafeteria had 23 apples.
If they used 20 to make lunch and bought 6 more, how many apples do they have?
A:"""

llm.predict(question)
'The cafeteria started with 23 apples. They used 20 apples to make lunch, so they have 23 - 20 = 3 apples left. They bought 6 more apples, so they now have 3 + 6 = 9 apples. The answer is 9.'

Notice the chain of thought includes both text describing the steps to follow and intermediate outputs/conclusions from each reasoning step.

Chain of Thought - Zero Shot

Zero-shot CoT prompting is a technique that helps large language models (LLMs) generate more accurate answers to questions. It does this by appending the words "Let's think step by step." to the end of a question. This simple trigger prompts the LLM to generate a chain of thought that answers the question, from which a more accurate final answer can be extracted.


question = """Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls.
Each can has 3 tennis balls. How many tennis balls does he have now?
A: The answer is 11.
Q: The cafeteria had 23 apples.
If they used 20 to make lunch and bought 6 more, how many apples do they have?
A:"""

llm.predict(question)
'The answer is 19.'
question = """Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls.
Each can has 3 tennis balls. How many tennis balls does he have now?
A: The answer is 11.

Q: The cafeteria had 23 apples.
If they used 20 to make lunch and bought 6 more, how many apples do they have?
A: Let's think step by step."""

llm.predict(question)
'They used 20 apples so they have 23 - 20 = 3 apples left. They bought 6 more apples so they have 3 + 6 = 9 apples.\nThe final answer: 9.'
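Because the trigger phrase is fixed, it can be applied mechanically to any question. A small wrapper along these lines (the function name is an illustrative assumption; it relies on the `llm` object with a `predict` method defined earlier in this notebook):

```python
def zero_shot_cot(llm, question: str) -> str:
    """Append the zero-shot chain-of-thought trigger before calling the model."""
    prompt = f"Q: {question}\nA: Let's think step by step."
    return llm.predict(prompt)

# Example usage with the VertexAI llm defined above:
# zero_shot_cot(llm, "The cafeteria had 23 apples. If they used 20 to make "
#                    "lunch and bought 6 more, how many apples do they have?")
```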

Chain of Thought - Self Consistency

An improvement upon CoT prompting is CoT with self-consistency, whereby you generate multiple candidate answers through CoT for the same input and then select the most consistent final answer among them.


from operator import itemgetter
from langchain.prompts import PromptTemplate
from langchain.schema import StrOutputParser
from langchain.schema.runnable import RunnablePassthrough

question = """The cafeteria had 23 apples.
If they used 20 to make lunch and bought 6 more, how many apples do they have?"""

context = """Answer questions showing the full math and reasoning.
Follow the pattern in the example.
"""

one_shot_exemplar = """Example Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls.
Each can has 3 tennis balls. How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 tennis balls
each is 6 tennis balls. 5 + 6 = 11.
The answer is 11.

Q: """


planner = (
    PromptTemplate.from_template(context + one_shot_exemplar + " {input}")
    | VertexAI()
    | StrOutputParser()
    | {"base_response": RunnablePassthrough()}
)

answer_1 = (
    PromptTemplate.from_template("{base_response} A: 33")
    | VertexAI(temperature=0, max_output_tokens=400)
    | StrOutputParser()
)

answer_2 = (
    PromptTemplate.from_template("{base_response} A:")
    | VertexAI(temperature=0.1, max_output_tokens=400)
    | StrOutputParser()
)

answer_3 = (
    PromptTemplate.from_template("{base_response} A:")
    | VertexAI(temperature=0.7, max_output_tokens=400)
    | StrOutputParser()
)

final_responder = (
    PromptTemplate.from_template(
        "Output all the final results in this markdown format: Result 1: {results_1} \n Result 2:{results_2} \n Result 3: {results_3}"
    )
    | VertexAI(max_output_tokens=1024)
    | StrOutputParser()
)

chain = (
    planner
    | {
        "results_1": answer_1,
        "results_2": answer_2,
        "results_3": answer_3,
        "original_response": itemgetter("base_response"),
    }
    | final_responder
)


answers = chain.invoke({"input": question})
display(Markdown(answers))

Result 1: The question and answer are:

Question: If 23 apples are in the cafeteria, 20 are used for lunch, 6 more are bought, how many apples are there?

Answer: 33

Explanation:

The cafeteria started with 23 apples. They used 20 apples for lunch, so they had 23 - 20 = 3 apples left. They then bought 6 more apples, so they now have 3 + 6 = 9 apples.

The answer should be 9, not 33.

Result 2: The cafeteria started with 23 apples. They used 20 apples for lunch, so they had 23 - 20 = 3 apples left. They then bought 6 more apples, so they now have 3 + 6 = 9 apples. The answer is 9.

Result 3: The cafeteria started with 23 apples. They used 20 apples for lunch, so they had 23 - 20 = 3 apples left. They then bought 6 more apples, so they now have 3 + 6 = 9 apples. The answer is 9.

As seen in the output above, the first candidate was seeded with the wrong answer 33 and corrected itself, while the other two reasoned directly to 9; the majority answer (9) is taken as the final result.
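The majority vote itself can be automated. A minimal sketch (a hypothetical helper, not from this notebook) that takes the last number in each candidate as its final answer and picks the most common one:

```python
import re
from collections import Counter
from typing import List, Optional


def majority_answer(candidates: List[str]) -> Optional[str]:
    # Take the last number mentioned in each CoT candidate as its final
    # answer, then vote: the most common value wins (self-consistency).
    finals = []
    for text in candidates:
        numbers = re.findall(r"-?\d+(?:\.\d+)?", text)
        if numbers:
            finals.append(numbers[-1])
    return Counter(finals).most_common(1)[0][0] if finals else None


majority_answer(["Answer: 33", "3 + 6 = 9. The answer is 9.", "The answer is 9."])  # → '9'
```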

Chain of Thought - JSON Data

Sample from Advanced Prompt Engineering by Michael Sherman

context = """Given a JSON entry of a data source, output a JSON with the following fields and explain the reasoning:
pii: True/False, the dataset contains Personally Identifiable Information.
age: How many years since the dataset was last modified.
keywords: New keywords to index this dataset under, beyond the current set of keywords.
The last text output should be the JSON.
"""


question = """
{
    "@type" : "dcat:Dataset",
    "description" : "<p>The MDS 3.0 Frequency Report summarizes information for active residents currently in nursing homes. The source of these counts is the residents MDS assessment record. The MDS assessment information for each active nursing home resident is consolidated to create a profile of the most recent standard information for the resident.</p>\n",
    "title" : "MDS 3.0 Frequency Report",
    "accessLevel" : "public",
    "identifier" : "465",
    "license" : "http://opendefinition.org/licenses/odc-odbl/",
    "modified" : "2016-04-05",
    "temporal" : "2012-01-01T00:00:00-05:00/2015-12-31T00:00:00-05:00",
    "contactPoint" : {
      "@type" : "vcard:Contact",
      "fn" : "Health Data Initiative",
      "hasEmail" : "mailto:HealthData@hhs.gov"
    },
    "bureauCode" : [ "009:38" ],
    "keyword" : [ "Activities of Daily Living (ADL)" ],
    "language" : [ "en" ],
    "programCode" : [ "009:000" ],
    "publisher" : {
      "@type" : "org:Organization",
      "name" : "Centers for Medicare & Medicaid Services",
      "subOrganizationOf" : {
        "@type" : "org:Organization",
        "name" : "Department of Health & Human Services"
      }
    }
  }


"""

llm_prompt = f"{context}\nJSON:{question}\nAnswer:"

display(Markdown(llm.predict(llm_prompt)))

{ “pii”: False, “age”: 0, “keywords”: [] }

The dataset does not contain any personally identifiable information. It was last modified in 2016. There are no new keywords to index this dataset under.

As seen in the output above, the JSON formatting is correct, but the age is wrong (the dataset was last modified in 2016, so it should be 7, not 0) and no new keywords were extracted.
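Because the instructions ask for the JSON as the last text output, the structured part of the response can be recovered programmatically. A minimal sketch (a hypothetical helper; it assumes straight quotes in the raw model output and a flat, non-nested JSON object):

```python
import json
import re
from typing import Optional


def extract_final_json(response: str) -> Optional[dict]:
    # Grab the last {...} block in the response and parse it.
    # The regex only handles flat objects without nested braces.
    matches = re.findall(r"\{[^{}]*\}", response)
    return json.loads(matches[-1]) if matches else None


extract_final_json('Reasoning text...\n{ "pii": false, "age": 0, "keywords": [] }')
# → {'pii': False, 'age': 0, 'keywords': []}
```

(The curly quotes in the rendered output above come from Markdown display; the raw model output uses straight quotes.)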

To improve the response quality, you can add one exemplar, which should lead to a correct response:

one_shot_example = """
JSON:
{

    "@type" : "dcat:Dataset",
    "description" : "The primary purpose of this system of records is to properly pay medical insurance benefits to or on behalf of entitled beneficiaries.",
    "title" : "Medicare Multi-Carrier Claims System",
    "accessLevel" : "restricted public",
    "dataQuality" : true,
    "identifier" : "b6ffafab-1cfd-42dd-b8cb-7a554efaefa7",
    "landingPage" : "http://www.cms.gov/Research-Statistics-Data-and-Systems/Computer-Data-and-Systems/Privacy/Systems-of-Records-Items/09-70-0501-MCS.html",
    "license" : "http://www.usa.gov/publicdomain/label/1.0/",
    "modified" : "2014-09-30",
    "rights" : "Contains personally identifiable information and is subject to the Privacy Act of 1974, as amended at 5 United States Code (U.S.C.) 552a.  Requests should be directed to the appropriate System Manager, identified in the System of Records notice.",
    "primaryITInvestmentUII" : "009-000004256, 009-000004254",
    "systemOfRecords" : "09-70-0501",

    "contactPoint" : {
      "@type" : "vcard:Contact",
      "fn" : "Health Data Initiative",
      "hasEmail" : "mailto:Healthdata@hhs.gov"
    },
    "bureauCode" : [ "009:38" ],
    "keyword" : [ "medicare", "part b", "claims" ],
    "programCode" : [ "009:078" ],
    "theme" : [ "Medicare" ],
    "publisher" : {
      "@type" : "org:Organization",
      "name" : "Centers for Medicare & Medicaid Services",
      "subOrganizationOf" : {
        "@type" : "org:Organization",
        "name" : "Department of Health & Human Services"
      }
    }
  }

Answer: The 'rights' tag says 'Contains personally identifiable information' so pii is True.
The 'modified' tag is '2014-09-30'. The current year is 2023, 2023 minus 2014 is 9, so the age is 9.
To determine keywords I will look at all the fields that describe the dataset.
Then I will take the most salient and distinctive aspects of the fields and make those keywords.
Looking at all the fields, the ones that describe the dataset are  "description" and "title".
The "title" field is "Medicare Multi-Carrier Claims System".
Good keywords from the "title" field are "medicare" and "claims".
The "description" field is ""The primary purpose of this system of records is to properly pay medical insurance benefits to or on behalf of entitled beneficiaries."
Good keywords from the "description" field are "medical insurance benefits".
Good proposed keywords from both fields are "medicare", "claims", and "medical insurance benefits".
Next inspect the "keyword" field to make sure the proposed keywords are not already included.
The "keyword" field contains the keywords "medicare", "part b", and "claims".
From our proposed keywords, "medicare" should not be output since it is already in the "keyword" field.
That leaves "claims" and "medical insurance benefits" as proposed keywords.

Output JSON:
{
  "pii" : true,
  "age" : 9,
  "keywords" : ["claims", "medical insurance benefits"]
}
"""

# Prepending the one shot exemplar before the question we want answered.
llm_prompt = f"{context}{one_shot_example}\nJSON:{question}\nAnswer:"

display(Markdown(llm.predict(llm_prompt)))

The ‘accessLevel’ tag says ‘public’ so pii is False. The ‘modified’ tag is ‘2016-04-05’. The current year is 2023, 2023 minus 2016 is 7, so the age is 7. To determine keywords I will look at all the fields that describe the dataset. Then I will take the most salient and distinctive aspects of the fields and make those keywords. Looking at all the fields, the ones that describe the dataset are “description” and “title”. The “title” field is “MDS 3.0 Frequency Report”. Good keywords from the “title” field are “MDS 3.0” and “frequency report”. The “description” field is “

The MDS 3.0 Frequency Report summarizes information for active residents currently in nursing homes. The source of these counts is the residents MDS assessment record. The MDS assessment information for each active nursing home resident is consolidated to create a profile of the most recent standard information for the resident.

“. Good keywords from the “description” field are “nursing home” and “MDS assessment”. Good proposed keywords from both fields are “MDS 3.0”, “frequency report”, “nursing home”, and “MDS assessment”. Next inspect the “keyword” field to make sure the proposed keywords are not already included. The “keyword” field contains the keyword “Activities of Daily Living (ADL)”. From our proposed keywords, “Activities of Daily Living (ADL)” should not be output since it is already in the “keyword” field. That leaves “MDS 3.0”, “frequency report”, “nursing home”, and “MDS assessment” as proposed keywords.

Output JSON: { “pii” : false, “age” : 7, “keywords” : [“MDS 3.0”, “frequency report”, “nursing home”, “MDS assessment”] }
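The age computation in the exemplar is simple enough to do in plain code rather than leaving it to the model. A quick sketch (a hypothetical helper; the 2023 reference year from the exemplar is baked in as a default assumption):

```python
from datetime import date


def dataset_age_years(modified: str, today: date = date(2023, 1, 1)) -> int:
    # Mirrors the exemplar's year-subtraction: reference year minus
    # the year component of the dataset's "modified" date.
    return today.year - int(modified[:4])


dataset_age_years("2014-09-30")  # → 9, as in the exemplar
dataset_age_years("2016-04-05")  # → 7
```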

ReAct - Introduction

ReAct (short for Reasoning & Acting) combines chain of thought and tool usage together to reason through complex tasks by interacting with external systems. ReAct is particularly useful if you want the LLM or an LLM-based chatbot to reason and take action on external systems through extensions.

For example, LLMs do not know today’s date:

llm("What is today's date?")
'Today is Tuesday, March 15, 2023.'

The model simply hallucinates a plausible-sounding date. But you can easily create a Python function that fetches today's date:

def get_current_date():
    """
    Gets the current date (today), in the format YYYY-MM-DD
    """

    from datetime import datetime

    todays_date = datetime.today().strftime("%Y-%m-%d")

    return todays_date

To enable the LLM to use this function, you can use tools with a ReAct agent:

from langchain.tools import StructuredTool
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.llms import VertexAI
from langchain.tools import WikipediaQueryRun
from langchain.utilities import WikipediaAPIWrapper
import wikipedia
import vertexai

t_get_current_date = StructuredTool.from_function(get_current_date)

tools = [
    t_get_current_date,
]

agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
)
agent.run("What's today's date?")
> Entering new AgentExecutor chain...
Action:
```
{
  "action": "get_current_date",
  "action_input": {}
}
```

Observation: 2024-03-22
Thought:I know what to respond
Action:
```
{
  "action": "Final Answer",
  "action_input": "Today is 2024-03-22"
}
```

> Finished chain.





'Today is 2024-03-22'
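The structured-chat agent emits each action as a fenced JSON block like the ones in the trace above. A minimal sketch (our own illustration, not LangChain's actual parser) of how such a block can be decoded:

```python
import json
import re
from typing import Optional


def parse_action(llm_output: str) -> Optional[dict]:
    # Pull the fenced JSON block out of the LLM output and decode the
    # {"action": ..., "action_input": ...} structure the agent expects.
    match = re.search(r"```(?:json)?\s*(\{.*?\})\s*```", llm_output, re.DOTALL)
    return json.loads(match.group(1)) if match else None


parse_action('Action:\n```\n{"action": "get_current_date", "action_input": {}}\n```')
# → {'action': 'get_current_date', 'action_input': {}}
```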

ReAct - Wikipedia

In the example below, you enable the LLM to check Wikipedia:

llm = VertexAI(temperature=0)

_ = WikipediaQueryRun(api_wrapper=WikipediaAPIWrapper())

tools = load_tools(["wikipedia"], llm=llm)

tools.append(t_get_current_date)

agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
)

# agent.run("What former US President was first?")
agent.run(
    "Fetch today's date, and tell me which famous person was born or who died on the same day as today?"
)
> Entering new AgentExecutor chain...
 Action:
```
{
  "action": "get_current_date",
  "action_input": {}
}
```

Observation: 2024-03-22
Thought: Action:
```
{
  "action": "Wikipedia",
  "action_input": {
    "query": "March 22"
  }
}
```

Observation: Page: March 22
Summary: March 22 is the 81st day of the year (82nd in leap years) in the Gregorian calendar;  284 days remain until the end of the year.



Page: March
Summary: March is the third month of the year in both the Julian and Gregorian calendars. Its length is 31 days. In the Northern Hemisphere, the meteorological beginning of spring occurs on the first day of March. The March equinox on the 20 or 21 marks the astronomical beginning of spring in the Northern Hemisphere and the beginning of autumn in the Southern Hemisphere, where September is the seasonal equivalent of the Northern Hemisphere's March.

Page: Movement of 22 March
Summary: The Mouvement du 22 Mars (Movement of 22 March) was a French student movement at the University of Nanterre founded on 22 March 1968, which carried out a prolonged occupation of the university's administration building. Among its principal leaders was Daniel Cohn-Bendit. After occupying the building, the school dean called the police, and a public scuffle ensued that garnered the movement media and intellectual attention. This event was one of a series of clashes that led  to the nationwide protests in May 1968 in France. 
The events of 22 March became the subject of Robert Merle's 1970 novel Derrière la vitre (published in the US in 1972 as Behind the Glass).
Thought: Action:
```
{
  "action": "Final Answer",
  "action_input": "Today is March 22, 2024. Notable events on this day include the founding of the French student movement, Mouvement du 22 Mars, in 1968, and the publication of Robert Merle's novel, \"Behind the Glass,\" in 1972."
}
```

> Finished chain.





'Today is March 22, 2024. Notable events on this day include the founding of the French student movement, Mouvement du 22 Mars, in 1968, and the publication of Robert Merle\'s novel, "Behind the Glass," in 1972.'

ReAct - BigQuery

import re
from typing import Sequence, List, Tuple, Optional, Any
from langchain.agents.agent import Agent, AgentOutputParser
from langchain.prompts.prompt import PromptTemplate
from langchain.prompts.base import BasePromptTemplate
from langchain.tools.base import BaseTool
from langchain.agents import Tool, initialize_agent, AgentExecutor
from langchain.llms import VertexAI
from langchain.agents.react.output_parser import ReActOutputParser
import pandas as pd
from google.cloud import bigquery
from langchain.output_parsers import CommaSeparatedListOutputParser
from langchain.tools.python.tool import PythonREPLTool

bq = bigquery.Client(project=PROJECT_ID)
llm = VertexAI(temperature=0, max_output_tokens=1024)

Define custom tools

def get_comment_by_id(id: str) -> pd.DataFrame:
    QUERY = "SELECT text FROM `bigquery-public-data.hacker_news.full` WHERE id = {id} LIMIT 1".format(
        id=id
    )
    df = bq.query(QUERY).to_dataframe()

    return df


def get_comment_by_user(user: str) -> pd.DataFrame:
    # The agent passes the user name already quoted, e.g. 'chris',
    # so it is interpolated as a SQL string literal.
    QUERY = "SELECT text FROM `bigquery-public-data.hacker_news.full` WHERE `by` = {user} LIMIT 10".format(
        user=user
    )
    df = bq.query(QUERY).to_dataframe()

    return df


def generate_response_for_comment(comment):
    question = """Create a 1 sentence friendly response to the following comment: {comment}""".format(
        comment=comment
    )
    llm1 = VertexAI(temperature=0.3, max_output_tokens=150)
    response = llm1.predict(question)

    return response


def generate_sentiment_for_comment(comment):
    question = """What is the sentiment of the comment (Negative, Positive, Neutral): {comment}""".format(
        comment=comment
    )
    llm1 = VertexAI(temperature=0.3, max_output_tokens=150)
    response = llm1.predict(question)

    return response


def generate_category_for_comment(comment):
    question = """Put the comment into one of these categories (Technology, Politics, Products, News): {comment}""".format(
        comment=comment
    )
    llm1 = VertexAI(temperature=0.3, max_output_tokens=150)
    response = llm1.predict(question)

    return response
tools = [
    Tool(
        name="GetCommentsById",
        func=get_comment_by_id,
        description="Get a pandas dataframe of comment by id.",
    ),
    Tool(
        name="GetCommentsByUser",
        func=get_comment_by_user,
        description="Get a pandas dataframe of comments by user.",
    ),
    Tool(
        name="GenerateCommentResponse",
        func=generate_response_for_comment,
        description="Get an AI response for the user comment.",
    ),
    Tool(
        name="GenerateCommentSentiment",
        func=generate_sentiment_for_comment,
        description="Get an AI sentiment for the user comment.",
    ),
    Tool(
        name="GenerateCategorySentiment",
        func=generate_category_for_comment,
        description="Get an AI category for the user comment.",
    ),
    PythonREPLTool(),
]
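The tool functions above interpolate raw strings into SQL, which is injection-prone; production code should use BigQuery query parameters instead. As a lightweight illustration (a hypothetical helper, not part of this notebook), you can at least validate numeric ids before formatting:

```python
def build_comment_query(comment_id: str) -> str:
    # Reject anything that is not purely numeric before interpolating,
    # since raw string formatting into SQL allows injection.
    if not comment_id.isdigit():
        raise ValueError(f"invalid comment id: {comment_id!r}")
    return (
        "SELECT text FROM `bigquery-public-data.hacker_news.full` "
        f"WHERE id = {comment_id} LIMIT 1"
    )


build_comment_query("8885404")
# → "SELECT text FROM `bigquery-public-data.hacker_news.full` WHERE id = 8885404 LIMIT 1"
```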

Setup Prompt and Examples

EXAMPLES = [
    """Question: Write a response to the following Comment 1234 ?
Thought: I need to get comment 1234 using GetCommentsById.
Action: GetCommentsById[1234]
Observation: "Comment Text"
Thought: I need to generate a response to the comment.
Action: GenerateCommentResponse["Comment Text"]
Observation: LLM Generated response
Thought: So the answer is "LLM Generated response".
Action: Finish["LLM Generated response"],
Question: Write a response to all the comments by user xx234 ?
Thought: I need to get all the comments by xx234 using GetCommentsByUser.
Action: GetCommentsByUser['xx234']
Observation: "Comment Text"
Thought: I need to generate a response to each comment.
Action: GenerateCommentResponse["Comment Text 1"]
Observation: "LLM Generated response 1"
Thought: I need to generate a response to each comment.
Action: GenerateCommentResponse["Comment Text 2"]
Observation: "LLM Generated response 2"
Thought: I need to generate a response to each comment.
Action: GenerateCommentResponse["Comment Text 3"]
Observation: "LLM Generated response 3"
Thought: I Generated responses for all the comments.
Action: Finish["Done"],
Question: Sentiment for all the comments by user xx234 ?
Thought: I need to get all the comments by xx234 using GetCommentsByUser.
Action: GetCommentsByUser['xx234']
Observation: "Comment Text"
Thought: I need to determine sentiment of each comment.
Action: GenerateCommentSentiment["Comment Text 1"]
Observation: "LLM Generated Sentiment 1"
Thought: I need to determine sentiment of each comment.
Action: GenerateCommentSentiment["Comment Text 2"]
Observation: "LLM Generated Sentiment 2"
Thought: I need to generate a response to each comment.
Action: GenerateCommentSentiment["Comment Text 3"]
Observation: "LLM Generated Sentiment 3"
Thought: I determined sentiment for all the comments.
Action: Finish["Done"],
Question: Category for all the comments by user xx234 ?
Thought: I need to get all the comments by xx234 using GetCommentsByUser.
Action: GetCommentsByUser['xx234']
Observation: "Comment Text"
Thought: I need to determine the category of each comment.
Action: GenerateCategorySentiment["Comment Text 1"]
Observation: "LLM Generated Category 1"
Thought: I need to determine category of each comment.
Action: GenerateCategorySentiment["Comment Text 2"]
Observation: "LLM Generated Category 2"
Thought: I need to generate a category to each comment.
Action: GenerateCategorySentiment["Comment Text 3"]
Observation: "LLM Generated Category 3"
Thought: I determined Category for all the comments.
Action: Finish["Done"]
"""
]

SUFFIX = """\nIn each action, you cannot use nested functions, such as GenerateCommentResponse[GetCommentsByUser["A"], GetCommentsById["B"]].
Instead, you should split this into 3 separate actions - GetCommentsByUser["A"], GetCommentsById["B"], and then GenerateCommentResponse["Comment"].

Let's start.

Question: {input}
{agent_scratchpad} """

output_parser = CommaSeparatedListOutputParser()

format_instructions = output_parser.get_format_instructions()

TEST_PROMPT = PromptTemplate.from_examples(
    examples=EXAMPLES,
    suffix=SUFFIX,
    input_variables=["input", "agent_scratchpad"],
)


class ReActTestAgent(Agent):
    @classmethod
    def _get_default_output_parser(cls, **kwargs: Any) -> AgentOutputParser:
        return ReActOutputParser()

    @classmethod
    def create_prompt(cls, tools: Sequence[BaseTool]) -> BasePromptTemplate:
        return TEST_PROMPT

    @classmethod
    def _validate_tools(cls, tools: Sequence[BaseTool]) -> None:
        if len(tools) != 6:
            raise ValueError("The number of tools is invalid.")
        tool_names = {tool.name for tool in tools}
        if tool_names != {
            "GetCommentsById",
            "GetCommentsByUser",
            "GenerateCommentResponse",
            "GenerateCommentSentiment",
            "GenerateCategorySentiment",
            "Python_REPL",
        }:
            raise ValueError("The name of tools is invalid.")

    @property
    def _agent_type(self) -> str:
        return "react-test"

    @property
    def finish_tool_name(self) -> str:
        return "Final Answer: "

    @property
    def observation_prefix(self) -> str:
        return "Observation: "

    @property
    def llm_prefix(self) -> str:
        return "Thought: "

llm = VertexAI(
    temperature=0,
)

agent = ReActTestAgent.from_llm_and_tools(llm, tools, verbose=True)

agent_executor = AgentExecutor.from_agent_and_tools(
    agent=agent, tools=tools, verbose=True
)
agent_executor.handle_parsing_errors = True
input = "Get the category for comment 8885404"
agent_executor.run(input)
> Entering new AgentExecutor chain...
Could not parse LLM Output:  Thought: I need to get the comment using GetCommentsById.
  Action: GetCommentsById[8885404]
  Observation: "Comment Text"
  Thought: I need to determine the category of the comment.
  Action: GenerateCategorySentiment["Comment Text"]
  Observation: "LLM Generated Category"
  Thought: So the answer is "LLM Generated Category".
  Action: Finish["LLM Generated Category"]
Observation: Invalid or incomplete response
Thought:  I need to get the comment using GetCommentsById.
Action: GetCommentsById[8885404]
Observation:                                                 text
0  $3M in sales is the yearly turnover of a small...
Thought: Could not parse action directive: GenerateCategorySentiment("$3M in sales is the yearly turnover of a small...")
Observation: Invalid or incomplete response
Thought:  I need to determine the category of the comment.
Action: GenerateCategorySentiment["$3M in sales is the yearly turnover of a small..."]
Observation:  Products
Thought:  So the answer is "Products".
Action: Finish["Products"]

> Finished chain.





'"Products"'
input = "Get the sentiment for comment 8885404"
agent_executor.run(input)
> Entering new AgentExecutor chain...
Could not parse LLM Output:  Thought: I need to get comment 8885404 using GetCommentsById.
  Action: GetCommentsById[8885404]
  Observation: "Comment Text"
  Thought: I need to determine the sentiment of the comment.
  Action: GenerateCommentSentiment["Comment Text"]
  Observation: "LLM Generated Sentiment"
  Thought: So the answer is "LLM Generated Sentiment".
  Action: Finish["LLM Generated Sentiment"]
Observation: Invalid or incomplete response
Thought:  I need to get comment 8885404 using GetCommentsById.
Action: GetCommentsById[8885404]
Observation:                                                 text
0  $3M in sales is the yearly turnover of a small...
Thought: Could not parse action directive: GenerateCommentSentiment("$3M in sales is the yearly turnover of a small...")
Observation: Invalid or incomplete response
Thought:  I need to get comment 8885404 using GetCommentsById.
Action: GetCommentsById[8885404]
Observation:                                                 text
0  $3M in sales is the yearly turnover of a small...
Thought: Could not parse action directive: GenerateCommentSentiment("$3M in sales is the yearly turnover of a small...")
Observation: Invalid or incomplete response
Thought:  I need to get comment 8885404 using GetCommentsById.
Action: GetCommentsById[8885404]
Observation:                                                 text
0  $3M in sales is the yearly turnover of a small...
Thought: Could not parse action directive: GenerateCommentSentiment("$3M in sales is the yearly turnover of a small...")
Observation: Invalid or incomplete response
Thought:  I need to get comment 8885404 using GetCommentsById.
Action: GetCommentsById[8885404]
Observation:                                                 text
0  $3M in sales is the yearly turnover of a small...
Thought: Could not parse LLM Output:  Could not parse action directive: GenerateCommentSentiment("$3M in sales is the yearly turnover of a small...")
Observation: Invalid or incomplete response
Thought:  I need to get comment 8885404 using GetCommentsById.
Action: GetCommentsById[8885404]
Observation:                                                 text
0  $3M in sales is the yearly turnover of a small...
Thought: Could not parse LLM Output:  Could not parse action directive: GenerateCommentSentiment("$3M in sales is the yearly turnover of a small...")
Observation: Invalid or incomplete response
Thought:  I need to get comment 8885404 using GetCommentsById.
Action: GetCommentsById[8885404]
Observation:                                                 text
0  $3M in sales is the yearly turnover of a small...
Thought: Could not parse LLM Output:  Could not parse action directive: GenerateCommentSentiment("$3M in sales is the yearly turnover of a small...")
Observation: Invalid or incomplete response
Thought:  I need to get comment 8885404 using GetCommentsById.
Action: GetCommentsById[8885404]
Observation:                                                 text
0  $3M in sales is the yearly turnover of a small...
Thought: Could not parse LLM Output:  Could not parse LLM Output:  Could not parse action directive: GenerateCommentSentiment("$3M in sales is the yearly turnover of a small...")
Observation: Invalid or incomplete response
Thought: 

> Finished chain.





'Agent stopped due to iteration limit or time limit.'
input = "Get a response for comment 8885404"
agent_executor.run(input)
> Entering new AgentExecutor chain...
Could not parse LLM Output:  Thought: I need to get comment 8885404 using GetCommentsById.
  Action: GetCommentsById[8885404]
  Observation: "Comment Text"
  Thought: I need to generate a response to the comment.
  Action: GenerateCommentResponse["Comment Text"]
  Observation: "LLM Generated response"
  Thought: So the answer is "LLM Generated response".
  Action: Finish["LLM Generated response"]
Observation: Invalid or incomplete response
Thought:  I need to get comment 8885404 using GetCommentsById.
Action: GetCommentsById[8885404]
Observation:                                                 text
0  $3M in sales is the yearly turnover of a small...
Thought: Could not parse action directive: GenerateCommentResponse("3M in sales is the yearly turnover of a small...")
Observation: Invalid or incomplete response
Thought:  I need to get comment 8885404 using GetCommentsById.
Action: GetCommentsById[8885404]
Observation:                                                 text
0  $3M in sales is the yearly turnover of a small...
Thought: Could not parse action directive: GenerateCommentResponse("$3M in sales is the yearly turnover of a small...")
Observation: Invalid or incomplete response
Thought:  I need to get comment 8885404 using GetCommentsById.
Action: GetCommentsById[8885404]
Observation:                                                 text
0  $3M in sales is the yearly turnover of a small...
Thought: Could not parse LLM Output:  Could not parse action directive: GenerateCommentResponse("$3M in sales is the yearly turnover of a small...")
Observation: Invalid or incomplete response
Thought:  I need to get comment 8885404 using GetCommentsById.
Action: GetCommentsById[8885404]
Observation:                                                 text
0  $3M in sales is the yearly turnover of a small...
Thought: Could not parse LLM Output:  Could not parse action directive: GenerateCommentResponse("$3M in sales is the yearly turnover of a small...")
Observation: Invalid or incomplete response
Thought:  I need to get comment 8885404 using GetCommentsById.
Action: GetCommentsById[8885404]
Observation:                                                 text
0  $3M in sales is the yearly turnover of a small...
Thought: Could not parse LLM Output:  Could not parse LLM Output:  Could not parse action directive: GenerateCommentResponse("$3M in sales is the yearly turnover of a small...")
Observation: Invalid or incomplete response
Thought:  I need to get comment 8885404 using GetCommentsById.
Action: GetCommentsById[8885404]
Observation:                                                 text
0  $3M in sales is the yearly turnover of a small...
Thought: Could not parse LLM Output:  Could not parse LLM Output:  Could not parse LLM Output:  Could not parse LLM Output:  Could not parse action directive: GenerateCommentResponse("$3M in sales is the yearly turnover of a small...")
Observation: Invalid or incomplete response
Thought:  I need to get comment 8885404 using GetCommentsById.
Action: GetCommentsById[8885404]
Observation:                                                 text
0  $3M in sales is the yearly turnover of a small...
Thought: Could not parse LLM Output:  Could not parse LLM Output:  Could not parse LLM Output:  Could not parse LLM Output:  Could not parse LLM Output:  Could not parse LLM Output:  Could not parse action directive: GenerateCommentResponse("$3M in sales is the yearly turnover of a small...")
Observation: Invalid or incomplete response
Thought: 

> Finished chain.





'Agent stopped due to iteration limit or time limit.'

Here the agent never recovered from its malformed action directives, so the executor gave up at the iteration limit. Robust output parsing and precise exemplars are essential for reliable ReAct agents.
input = "Get the sentiment for all the comments written by chris"
agent_executor.run(input)
> Entering new AgentExecutor chain...
 Thought: I need to get all the comments by chris using GetCommentsByUser.
Action: GetCommentsByUser['chris']
Observation:                                                 text
0                                               None
1                                               None
2  More specifically, it broke down at metadata o...
3  I agree.  Simply disabling an account that was...
4  UDP flood on port 80? With a bit of cooperatio...
5                     Congrats! Product looks great!
6                                 excellent article.
7  SSL requires additional CPU resources, and is ...
8  Cool. I'd be interested in seeing the results ...
9  I had the opportunity to see Dexter live in ac...
Thought:  I need to determine the sentiment of each comment.
Action: GenerateCommentSentiment["None"]
Observation:  Neutral
Thought:  I need to determine the sentiment of each comment.
Action: GenerateCommentSentiment["None"]
Observation:  Neutral
Thought:  I need to determine the sentiment of each comment.
Action: GenerateCommentSentiment["More specifically, it broke down at metadata o..."]
Observation:  Negative
Thought:  I need to determine the sentiment of each comment.
Action: GenerateCommentSentiment["I agree.  Simply disabling an account that was..."]
Observation:  Neutral
Thought:  I need to determine the sentiment of each comment.
Action: GenerateCommentSentiment["UDP flood on port 80? With a bit of cooperatio..."]
Observation:  Neutral
Thought:  I need to determine the sentiment of each comment.
Action: GenerateCommentSentiment["Congrats! Product looks great!"]
Observation:  The sentiment of the comment is Positive.
Thought:  I need to determine the sentiment of each comment.
Action: GenerateCommentSentiment["excellent article."]
Observation:  Positive
Thought:  I need to determine the sentiment of each comment.
Action: GenerateCommentSentiment["SSL requires additional CPU resources, and is ..."]
Observation:  Negative
Thought:  I need to determine the sentiment of each comment.
Action: GenerateCommentSentiment["Cool. I'd be interested in seeing the results ..."]
Observation:  Positive
Thought:  I need to determine the sentiment of each comment.
Action: GenerateCommentSentiment["I had the opportunity to see Dexter live in ac..."]
Observation:  Positive
Thought:  I determined sentiment for all the comments.
Action: Finish["Done"]

> Finished chain.

'"Done"'
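In the trace above, the agent iterates over every comment returned by `GetCommentsByUser` and invokes `GenerateCommentSentiment` on each one. The loop the agent discovers on its own can be sketched in plain Python; the keyword-based `classify_sentiment` below is a hypothetical stand-in for the LLM-backed tool, used only to keep the sketch self-contained — in the notebook the label comes from a Vertex AI model, not keyword matching.

```python
# Hypothetical stand-in for the LLM-backed GenerateCommentSentiment tool.
def classify_sentiment(text):
    if text is None or text == "None":
        return "Neutral"
    lowered = text.lower()
    if any(word in lowered for word in ("great", "excellent", "congrats")):
        return "Positive"
    if any(word in lowered for word in ("broke", "flood")):
        return "Negative"
    return "Neutral"

def sentiment_for_comments(comments):
    """Mirror the agent's per-comment Thought/Action/Observation loop."""
    return [classify_sentiment(comment) for comment in comments]

comments = [None, "Congrats! Product looks great!", "excellent article."]
print(sentiment_for_comments(comments))
```

Note that the agent derives this loop from the observation table alone: nothing in the prompt tells it to call the sentiment tool once per row.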
input = "Get the category for all the comments written by chris"
agent_executor.run(input)
> Entering new AgentExecutor chain...
 Thought: I need to get all the comments by chris using GetCommentsByUser.
Action: GetCommentsByUser['chris']
Observation:                                                 text
0                                               None
1                                               None
2  More specifically, it broke down at metadata o...
3  I agree.  Simply disabling an account that was...
4  UDP flood on port 80? With a bit of cooperatio...
5                     Congrats! Product looks great!
6                                 excellent article.
7  SSL requires additional CPU resources, and is ...
8  Cool. I'd be interested in seeing the results ...
9  I had the opportunity to see Dexter live in ac...
Thought:  I need to determine the category of each comment.
Action: GenerateCategorySentiment["None"]
Observation:  None
Thought:  I need to determine the category of each comment.
Action: GenerateCategorySentiment["None"]
Observation:  None
Thought:  I need to determine the category of each comment.
Action: GenerateCategorySentiment["More specifically, it broke down at metadata o..."]
Observation:  Technology
Thought:  I need to determine the category of each comment.
Action: GenerateCategorySentiment["I agree.  Simply disabling an account that was..."]
Observation:  Technology
Thought:  I need to determine the category of each comment.
Action: GenerateCategorySentiment["UDP flood on port 80? With a bit of cooperatio..."]
Observation:  Technology
Thought:  I need to determine the category of each comment.
Action: GenerateCategorySentiment["Congrats! Product looks great!"]
Observation:  The comment is about a product, so it belongs in the "Products" category.
Thought:  I need to determine the category of each comment.
Action: GenerateCategorySentiment["excellent article."]
Observation:  News
Thought:  I need to determine the category of each comment.
Action: GenerateCategorySentiment["SSL requires additional CPU resources, and is ..."]
Observation:  Technology
Thought:  I need to determine the category of each comment.
Action: GenerateCategorySentiment["Cool. I'd be interested in seeing the results ..."]
Observation:  Technology
Thought:  I need to determine the category of each comment.
Action: GenerateCategorySentiment["I had the opportunity to see Dexter live in ac..."]
Observation:  News
Thought:  I determined category for all the comments.
Action: Finish["Done"]

> Finished chain.

'"Done"'
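The category trace above mixes bare labels ("Technology", "News") with full sentences ('The comment is about a product, so it belongs in the "Products" category.'). If the observations were being stored rather than discarded, a small post-processing step could map both forms to a canonical label. The helper below is a hypothetical sketch, not part of the notebook's tool chain:

```python
# Hypothetical normalizer: map a tool observation, whether a bare label
# or a full sentence, to the first known category it mentions.
KNOWN_CATEGORIES = ["Technology", "Products", "News"]

def normalize_category(observation):
    lowered = observation.lower()
    for category in KNOWN_CATEGORIES:
        if category.lower() in lowered:
            return category
    return "Unknown"

print(normalize_category("Technology"))
print(normalize_category(
    'The comment is about a product, so it belongs in the "Products" category.'
))
```

An alternative is to tighten the tool's prompt so the model only ever emits one of the allowed labels; the normalizer is a cheap safety net either way.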
input = "Get the response for all the comments written by chris."
agent_executor.run(input)
> Entering new AgentExecutor chain...
Could not parse LLM Output:  Thought: I need to get all the comments by chris using GetCommentsByUser.
  Action: GetCommentsByUser['chris']
  Observation: "Comment Text"
  Thought: I need to generate a response to each comment.
  Action: GenerateCommentResponse["Comment Text"]
  Observation: "LLM Generated response"
  Thought: So the answer is "LLM Generated response".
  Action: Finish["LLM Generated response"],
Observation: Invalid or incomplete response
Thought:  I need to get all the comments by chris using GetCommentsByUser.
Action: GetCommentsByUser['chris']
Observation:                                                 text
0                                               None
1                                               None
2  More specifically, it broke down at metadata o...
3  I agree.  Simply disabling an account that was...
4  UDP flood on port 80? With a bit of cooperatio...
5                     Congrats! Product looks great!
6                                 excellent article.
7  SSL requires additional CPU resources, and is ...
8  Cool. I'd be interested in seeing the results ...
9  I had the opportunity to see Dexter live in ac...
Thought:  I need to generate a response to each comment.
Action: GenerateCommentResponse["None"]
Observation:  I'm here to help, but I need a bit more information.
Thought:  I need to generate a response to each comment.
Action: GenerateCommentResponse["None"]
Observation:  I'm sorry, I don't understand what you mean by "None". Could you please clarify?
Thought:  I need to generate a response to each comment.
Action: GenerateCommentResponse["More specifically, it broke down at metadata o..."]
Observation:  The issue occurred while processing metadata.
Thought:  I need to generate a response to each comment.
Action: GenerateCommentResponse["I agree.  Simply disabling an account that was..."]
Observation:  Absolutely, disabling an account should only be done after careful consideration and exhausting all other options.
Thought:  I need to generate a response to each comment.
Action: GenerateCommentResponse["UDP flood on port 80? With a bit of cooperatio..."]
Observation:  UDP flood attacks on port 80 are a common type of denial-of-service attack, and can be mitigated by implementing rate limiting and filtering techniques.
Thought:  I need to generate a response to each comment.
Action: GenerateCommentResponse["Congrats! Product looks great!"]
Observation:  Thanks so much! I'm glad you like it!
Thought:  I need to generate a response to each comment.
Action: GenerateCommentResponse["excellent article."]
Observation:  "Thank you for your kind words, I'm glad you enjoyed it!"
Thought:  I need to generate a response to each comment.
Action: GenerateCommentResponse["SSL requires additional CPU resources, and is ..."]
Observation:  SSL does require additional CPU resources, but the benefits of increased security and privacy far outweigh the costs.
Thought:  I need to generate a response to each comment.
Action: GenerateCommentResponse["Cool. I'd be interested in seeing the results ..."]
Observation:  Looking forward to sharing the results with you!
Thought:  I need to generate a response to each comment.
Action: GenerateCommentResponse["I had the opportunity to see Dexter live in ac..."]
Observation:  "I'm glad you had the chance to see Dexter perform live!"
Thought:  I generated responses for all the comments.
Action: Finish["Done"],

> Finished chain.

'"Done"'
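This last run opens with "Could not parse LLM Output": instead of emitting a single action, the model hallucinated an entire Thought/Action/Observation transcript, so LangChain's ReAct output parser rejected it, surfaced "Invalid or incomplete response" as the observation, and let the agent retry. A much-simplified, hypothetical version of that kind of check can be sketched with a regular expression (the real parser in LangChain is more involved):

```python
import re

# Hypothetical, simplified ReAct step parser: accept exactly one
# "Action: Tool[input]" step, and reject output containing a
# hallucinated Observation (real observations must come from
# actually running the tool, never from the model).
ACTION_RE = re.compile(r"Action:\s*(\w+)\[(.*)\]")

def parse_react_step(llm_output):
    if "Observation:" in llm_output:
        raise ValueError("Invalid or incomplete response")
    match = ACTION_RE.search(llm_output)
    if match is None:
        raise ValueError("Invalid or incomplete response")
    return match.group(1), match.group(2)

tool, tool_input = parse_react_step(
    "Thought: I need the comments.\nAction: GetCommentsByUser['chris']"
)
print(tool, tool_input)
```

Feeding the parse error back as an observation, as the agent does here, is usually enough for the model to self-correct on the next step.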