🃏 SeKernel_for_LLM

This is a Python module used to create a semantic kernel in your OpenAI API compatible chat applications.

🍬 Features

  • In-chat memory
  • Internet-search
  • Database querying

⚙️ How to:

  • Clone the repo and import the modules into your project. Ensure the cloned modules are in your project directory.
  • import kernel
    import plugins
    
    ### INTERNET-SEARCH ###
    # Define search plugin
    search_prompt = plugins.searchPlugin(output=question)  # If context is None, use the Chat template. See `kernel.py` for more templates.
    
    # Initialize the kernel. See the `plugins.py` module for more plugins.
    data = kernel.shopTemplate(prompt=prompt, plugin=plugins.defaultPlugin(), context=search_prompt)
    # Pass context=None when no context is available; the assistant then has no
    # awareness of information or events after its training-data cutoff.
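    
    # A sketch of what `data` may look like at this point, assuming the kernel
    # templates build OpenAI-style message lists (the exact shape is inferred
    # from how `data` is later passed as `messages`, not confirmed by the docs):
    # data = [
    #     {"role": "system", "content": "...template instructions plus search context..."},
    #     {"role": "user", "content": prompt},
    # ]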
    
    ### DATABASE ###
    # Initialize the database plugin
    db = plugins.dbConn()
    
    # Use the database plugin along with the dbChatPlugin
    data = kernel.chatTemplate(prompt=prompt, plugin=plugins.dbChatPlugin())
    
    # Execute the query (here `response` is the SQL text returned by your LLM)
    db.execute(response)
    
    # Getting the output
    response = db.fetchall()
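    
    # A minimal end-to-end sketch of the flow above, assuming an OpenAI-style
    # client and that the model replies with plain SQL (`client` is a
    # placeholder here, not part of SeKernel_for_LLM):
    # output = client.create_chat_completion(messages=data)
    # response = output["choices"][0]["message"]["content"]  # the generated SQL
    # db.execute(response)
    # rows = db.fetchall()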
    
    ### LlamaCpp ###
    # Passing the kernel model to LlamaCpp
    from llama_cpp import Llama
    
    client = Llama(
        model_path=kernel.model()  # Make sure to add your GGUF model in the kernel module.
    )
    
    # Use the kernel by setting the messages parameter equal to data. Depending on
    # your LLM API definition, this parameter may have a different name; here it is
    # `messages`, as defined in the OpenAI API.
    output = client.create_chat_completion(
        messages=data
    )
    See the OpenAI API reference for more.
  • # You may then append any new messages to the kernel to keep in-chat memory
    data.append(new_message)  # See the chat-loop sketch after this list.
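
A minimal chat-loop sketch of that memory pattern, assuming the kernel templates return OpenAI-style message lists and the LlamaCpp `client` from above (the loop itself is illustrative, not part of the module):

    # Keep appending turns to `data` so the model always sees the full history
    while True:
        user_text = input("You: ")
        data.append({"role": "user", "content": user_text})
        output = client.create_chat_completion(messages=data)
        reply = output["choices"][0]["message"]["content"]
        print("Assistant:", reply)
        data.append({"role": "assistant", "content": reply})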

📽️ Short Films

See examples of using SeKernel_for_LLM with the 🦙 LlamaCpp Python bindings.

The square-root of 2

chat_with_history.mp4

Internet search with Google for the price of leggings

SeKernel_internet_search.mp4

Database query

db_query_demo.mp4

I hope this helps someone starting out. :octocat: Happy prompting!!!
