gpt4all: C# NuGet package: Model format not supported (no matching implementation found)
Bug Report
While using the NuGet package in a clean project I keep getting the error “Model format not supported (no matching implementation found)”. I tried new Gpt4AllModelFactory() without parameters, with a path to libllmodel.dll itself, and with a path to the folder containing the DLL. The models I am using were tested in the Chat UI 2.7.1.
Example Code
using Gpt4All;
var modelFactory = new Gpt4AllModelFactory("D:\\Repositories\\GPT4ALL_Test\\bin\\Debug\\net8.0\\runtimes\\win-x64\\native");
var modelPath = "C:\\Users\\Person\\AppData\\Local\\nomic.ai\\GPT4All\\nous-hermes-llama2-13b.Q4_0.gguf";
var prompt = "Hello there";
using var model = modelFactory.LoadModel(modelPath);
var result = await model.GetStreamingPredictionAsync(prompt, PredictRequestOptions.Defaults);
await foreach (var token in result.GetPredictionStreamingAsync())
{
    Console.Write(token);
}
Steps to Reproduce
1. Create a clean Visual Studio 2022 C# Console App project.
2. Install the NuGet package with NuGet\Install-Package Gpt4All -Version 0.6.4-alpha
3. Use the code above (with your own model and library paths, or an empty library path).
4. The error happens when LoadModel is called.
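The same repro can be set up from the command line with the dotnet CLI instead of Visual Studio (project name is arbitrary; package name and version are taken from the steps above):

```shell
# Create a clean console project and add the Gpt4All package
dotnet new console -n Gpt4AllTest
cd Gpt4AllTest
dotnet add package Gpt4All --version 0.6.4-alpha

# Replace Program.cs with the example code, then run;
# the error occurs at the LoadModel call
dotnet run
```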
Expected Behavior
The prompt is processed and a response is given.
Your Environment
- Bindings version (e.g. “Version” from pip show gpt4all): 2.7.0
- Operating System: Windows 11
- Chat model used (if applicable): nous-hermes-llama2-13b.Q4_0.gguf and gpt4all-falcon-newbpe-q4_0.gguf
About this issue
- Original URL
- State: open
- Created 4 months ago
- Reactions: 1
- Comments: 25 (4 by maintainers)
Still getting the error on x86 and x64
I have the same problem with em_german_mistral_v01.Q4_0.gguf and the most recent GPT4All NuGet package, 0.6.4-alpha
I tried x64 and x86 and it gives the same error
I did this, used the exact commit mentioned, and after all that, the error is still the same 😦
I’m trying to use mistral-7b-instruct-v0.1.Q4_0.gguf
Edit: Ran git submodule update --init since I wasn’t sure if it was using the right versions of the submodules after checking out the older commit. Got “Submodule path ‘gpt4all-backend/llama.cpp-mainline’: checked out ‘7d4ced850548642b9a1740fa25ecdef249fbf47f’” so it seemed to do something. But the issue remains the same.
I noticed I’m getting this error at the end of build:
That’s all it gives me. No error message from whatever command failed. But way further up there’s this error, which may or may not be related:
The DLL files are created fine, though.
Edit2: Finally got it working by building the natives with MSVC.
You can still check out the last version of GPT4All that is currently supported by the C# bindings - I believe that would be commit c13202a6f5f90094629cc6e214a2a4ccd91ccb74.
Then you can follow the build instructions.
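Putting the comments above together, the build-from-source workaround looks roughly like this (the repository URL, submodule flags, and MSVC generator are assumptions based on this thread - adjust to the project’s actual build instructions and your toolchain):

```shell
# Clone the repo and check out the last commit known to work
# with the C# bindings (commit hash from the comment above)
git clone https://github.com/nomic-ai/gpt4all.git
cd gpt4all
git checkout c13202a6f5f90094629cc6e214a2a4ccd91ccb74

# Make sure the backend submodules match that commit
git submodule update --init --recursive

# Build the native backend; per the comment above, building the
# natives with MSVC (rather than another compiler) resolved the error
cd gpt4all-backend
cmake -B build -G "Visual Studio 17 2022" -A x64
cmake --build build --config Release
```

After the build, point Gpt4AllModelFactory at the folder containing the produced native DLLs, as in the example code at the top of the issue.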
Hi again, since I need to use the models in C# and integrate them into a personal project, do you know if there is an alternative or another method to do it? Or will we just have to wait until it is resolved?
8.0 in my case.