CoreAssistant

A simple library enabling the power of ChatGPT and other OpenAI services in your application.

The library is under active development; stay tuned for new features!


Playground examples

Small example apps are provided to test the library:

A simple console app allowing you to interact in chat mode: ConsoleChat

To test the ConsoleChat example:

cd playground
cd ConsoleChat

dotnet restore
dotnet run

A web app allowing you to generate the title, description, and image of a product based on keywords: SmartShop

To test the SmartShop example:

1/ Duplicate the appsettings.json file and rename it to appsettings.Development.json

2/ Update the ApiKey parameter in the newly created file with your OpenAI API key

3/ Launch the app

cd playground
cd SmartShop

dotnet restore
dotnet run

# open your browser to the url displayed

Install the library

You can install the library using NuGet:

dotnet add package VinciDev.CoreAssistant

Quick Start

You can use the library directly or through dependency injection.

1/ Using the library directly

using CoreAssistant;
using CoreAssistant.Models;

...

var options = new CoreAssistantOptions("YOUR OPENAI API KEY");
var assistant = new Assistant(options);

// Get an answer from the ChatGPT API
var question = new ChatQuestion("Your question");
var answer = await assistant.Chat.AskForSomething(question);

Console.WriteLine(answer.Content);

// Generate an image with the DALL-E API
var prompt = new ImagePrompt("A black cat walking on a street during the night");
var result = await assistant.Image.Generate(prompt);

Console.WriteLine($"Url of the image: {result.Url}");

2/ Using dependency injection

In your Program.cs:

using CoreAssistant.Extensions;

...

builder.Services.AddCoreAssistant(options => {
    options.ApiKey = "YOUR OPENAI API KEY";
});

Warning: Do not store your API key in source code. Use appsettings.json instead.
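
For example, a minimal sketch that reads the key from configuration instead of hard-coding it (the OpenAI:ApiKey key name is an assumption; use whatever section you define in your appsettings.json):

using CoreAssistant.Extensions;

// Assumes appsettings.json (or appsettings.Development.json) contains:
// { "OpenAI": { "ApiKey": "..." } }
builder.Services.AddCoreAssistant(options =>
{
    options.ApiKey = builder.Configuration["OpenAI:ApiKey"];
});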

In a class of your project:

using CoreAssistant;
using CoreAssistant.Models;

public class ClassName
{
    private readonly Assistant _assistant;

    public ClassName(Assistant assistant)
    {
        _assistant = assistant;
    }

    // Get a ChatGPT answer
    public async Task<string> GetAnswer(string query)
    {
        var question = new ChatQuestion(query);
        var answer = await _assistant.Chat.AskForSomething(question);

        return answer.Content;
    }

    // Get a DALL-E image
    public async Task<string> GenerateImage(string query)
    {
        var prompt = new ImagePrompt(query);
        var result = await _assistant.Image.Generate(prompt);

        return result.Url;
    }
}
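
To wire this class up, register it with the dependency injection container next to CoreAssistant. A minimal sketch, assuming an ASP.NET Core minimal API host; the scoped lifetime and the /answer endpoint are illustrative choices, not part of the library:

using CoreAssistant.Extensions;

var builder = WebApplication.CreateBuilder(args);

// Register CoreAssistant (as above) and the sample class that consumes it
builder.Services.AddCoreAssistant(options =>
{
    options.ApiKey = builder.Configuration["OpenAI:ApiKey"]; // assumed configuration key
});
builder.Services.AddScoped<ClassName>();

var app = builder.Build();

// Hypothetical endpoint exposing the sample class
app.MapGet("/answer", async (ClassName service, string query) =>
    await service.GetAnswer(query));

app.Run();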

Advanced use

History context

The library keeps the conversation history during the lifecycle of your Assistant instance. You can then ask any question and get answers based on the entire history (like the ChatGPT website).
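
For example, a minimal sketch of two consecutive questions sent to the same Assistant instance, where the follow-up only makes sense thanks to the history kept by the library:

var options = new CoreAssistantOptions("YOUR OPENAI API KEY");
var assistant = new Assistant(options);

// First question starts the conversation
var first = await assistant.Chat.AskForSomething(
    new ChatQuestion("Give me three ideas for a blog post about .NET"));
Console.WriteLine(first.Content);

// Follow-up question relies on the conversation history for "the second one"
var second = await assistant.Chat.AskForSomething(
    new ChatQuestion("Write an outline for the second one"));
Console.WriteLine(second.Content);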

Default Context

You can define a default context to specialize your assistant's answers. To do so, just set the DefaultContext property on your CoreAssistantOptions object.

Ex:

var options = 
    new CoreAssistantOptions("YOUR OPENAI API KEY") {
        DefaultContext = "YOUR DEFAULT CONTEXT"
    };

Async vs Stream for Chat answer

You can receive an answer either asynchronously or as a stream. To do so, just call the method matching your desired behavior.

Ex:

using CoreAssistant;
using CoreAssistant.Models;

...

var options = new CoreAssistantOptions("YOUR OPENAI API KEY");
var assistant = new Assistant(options);
var question = new ChatQuestion("You question");

// Async call
var answer = await assistant.Chat.AskForSomething(question);
Console.WriteLine(answer.Content);

// Stream call
var stream = assistant.Chat.AskForSomethingAsStream(question);
await foreach (var item in stream)
{
    Console.Write(item.Content);
}

ChatGPT Model

You can define the GPT model used by the library. To do so, set it when calling the AskForSomething() or AskForSomethingAsStream() method.

Note: GPT-4 model access is restricted; join the waiting list to get access.

Ex:

using CoreAssistant;
using CoreAssistant.Models;

...

var question = new ChatQuestion("Your question");

// Async call
var answer = await assistant.Chat.AskForSomething(question, ChatModel.GPT3_5);

// Stream call
var stream = assistant.Chat.AskForSomethingAsStream(question, ChatModel.GPT4);