Assemble Your Own LLM Application through Streamlit and Prompt Engineering Tactics (Part 2)


Summarization

    text = st.text_area("Enter the text you want to summarize")

    no_of_words = st.number_input("Word limit for the summary", min_value=0, max_value=100, value=30, step=1)

    if st.button('Proceed'):

        prompt = f"""
        Summarize the text delimited by triple backticks ''' into {no_of_words} words.
        '''{text}'''
        """

        # Messages: the system instruction plus the user prompt
        system_message = {'role': 'system', 'content': 'Provide your own sys message'}
        human_message  = {'role': 'user', 'content': prompt}

        message = [system_message, human_message]

        response = get_completion(message=message)  # chat helper defined in Part 1

This code builds a Streamlit interface for text summarization. The user pastes text and chooses a word limit; when 'Proceed' is clicked, a prompt instructing the LLM to summarize the text within that word limit is assembled and sent to the model along with a system message.

Result of User Interface
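All three snippets in this post send their messages through a get_completion helper defined in Part 1. If you don't have it handy, here is a minimal sketch of what such a helper could look like, assuming the legacy openai Python SDK (pre-1.0) and the gpt-3.5-turbo model; your version from Part 1 may differ:

    import openai

    openai.api_key = "YOUR_API_KEY"  # assumption: swap in your own key handling

    def get_completion(message, model="gpt-3.5-turbo"):
        # Send the system + user messages to the chat model and return the reply text
        completion = openai.ChatCompletion.create(
            model=model,
            messages=message,
            temperature=0,  # deterministic output suits summarization and QA
        )
        return completion.choices[0].message["content"]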

Table QA System



                   prompt = f"""
                        Table Provided :
                        {data}

                        Instruction :
                        1. Based on user provided question which is delimited with triple backticks.
                        2. Check if given question related table then provide answer

                        Question:
                        '''{question}'''
                        """

                        # Message 
                        system_message = {'role':'system','content':'Provide your own sys message'}

                        human_message  = {'role':'user','content':prompt}

                        message = [system_message,human_message]


                        response = response.get_completion(message=message)

This code builds the table QA part of the application. The user asks a question about a table; when 'Proceed' is clicked, a prompt instructing the LLM to answer from the provided table is assembled and sent to the model along with a system message.

Result of User Interface
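The snippet above assumes data (the table rendered as text) and question already exist. One way to collect them, sketched here with an uploaded CSV and illustrative widget labels that may not match the author's exact setup:

    import pandas as pd
    import streamlit as st

    uploaded = st.file_uploader("Upload a CSV table", type="csv")
    question = st.text_input("Ask a question about the table")

    if uploaded is not None:
        df = pd.read_csv(uploaded)
        st.dataframe(df)                   # preview the table in the UI
        data = df.to_string(index=False)   # plain-text version for the prompt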

QA System

prompt = f"""
                    Information: 
                    {content}

                    question : 
                    {question}

                    Instruction :                    
                    1. Based on user provided information and question which is enclosed with curly braces.
                    2. Answer should be given from provided information only.
                    """ 

                    # Message 
                    system_message = {'role':'system','content':'Give your own sys message'}

                    human_message  = {'role':'user','content':f'{prompt}'}

                    message = [system_message,human_message]

                    response = response.get_completion(message=message)

This code builds a plain QA system. The user supplies reference information and a question; when 'Proceed' is clicked, a prompt instructing the LLM to answer only from the provided information is assembled and sent to the model along with a system message.

Result of User Interface
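As with the other apps, content and question come from the UI. A sketch of the input side, again with illustrative widget labels:

    import streamlit as st

    content  = st.text_area("Paste the reference information")
    question = st.text_input("Ask a question about it")

    if st.button('Proceed'):
        # Build the prompt and call get_completion exactly as shown above
        ...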

Note: If you haven't read the first part of this blog, it's like trying to watch a movie with a blindfold on. You'll miss all the action! So take off that blindfold and check out Part 1 for the full picture.

Summary

This article walks through building several Streamlit applications: a text summarization tool, a table QA system, and a plain QA system. The author explains the use of triple backticks in prompts to guard against prompt injection, shares the challenges encountered while prompting, and shows how the prompts were refined to overcome them. The article also covers the problems that arose when prompts exceeded the 4,096-token limit, which led to adopting a VectorDB. The author explains what a VectorDB is, how it works, and how it reduces the token count to avoid token errors, and discusses cosine similarity, a metric used to measure how similar two data objects are.
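To make the cosine-similarity step concrete: each document chunk and the user's question are embedded as vectors, and only the chunks whose vectors point in the most similar direction to the question are kept for the prompt. A toy sketch with made-up vectors (a real setup would use an embedding model and a VectorDB):

    import numpy as np

    def cosine_similarity(a, b):
        # cos(theta) = (a . b) / (|a| * |b|); 1.0 means the vectors point the same way
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

    # Toy embeddings: each chunk of the source document is a vector
    chunk_vectors = {
        "chunk_1": np.array([0.9, 0.1, 0.0]),
        "chunk_2": np.array([0.1, 0.8, 0.3]),
    }
    query_vector = np.array([0.85, 0.15, 0.05])

    # Keep only the most similar chunk so the prompt stays under the token limit
    best = max(chunk_vectors, key=lambda k: cosine_similarity(query_vector, chunk_vectors[k]))
    print(best)  # -> chunk_1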