gpt4all: C# NuGet package: Model format not supported (no matching implementation found)

Bug Report

While using the NuGet package in a clean project I keep getting the error “Model format not supported (no matching implementation found)”. I tried using new Gpt4AllModelFactory() without parameters, with a path to libllmodel.dll itself, and with a path to the folder containing the DLL. The models I am using were tested in the Chat UI 2.7.1.

Example Code

using Gpt4All;

// Point the factory at the folder containing the native libllmodel.dll
var modelFactory = new Gpt4AllModelFactory("D:\\Repositories\\GPT4ALL_Test\\bin\\Debug\\net8.0\\runtimes\\win-x64\\native");

var modelPath = "C:\\Users\\Person\\AppData\\Local\\nomic.ai\\GPT4All\\nous-hermes-llama2-13b.Q4_0.gguf";
var prompt = "Hello there";

// This is where "Model format not supported (no matching implementation found)" is thrown
using var model = modelFactory.LoadModel(modelPath);

var result = await model.GetStreamingPredictionAsync(prompt, PredictRequestOptions.Defaults);

// Write each generated token to the console as it arrives
await foreach (var token in result.GetPredictionStreamingAsync())
{
    Console.Write(token);
}

Steps to Reproduce

1. Create a clean Visual Studio 2022 C# Console App project.
2. Install the NuGet package with NuGet\Install-Package Gpt4All -Version 0.6.4-alpha.
3. Use the code above (with your own model and lib paths, or an empty lib path).
4. The error happens when LoadModel is called.

Expected Behavior

The prompt is processed and a response is given.

Your Environment

  • Bindings version (e.g. “Version” from pip show gpt4all): 2.7.0
  • Operating System: Windows 11
  • Chat model used (if applicable): nous-hermes-llama2-13b.Q4_0.gguf and gpt4all-falcon-newbpe-q4_0.gguf


About this issue

  • State: open
  • Created 4 months ago
  • Reactions: 1
  • Comments: 25 (4 by maintainers)

Most upvoted comments

Still getting the error on x86 and x64

I have the same problem with em_german_mistral_v01.Q4_0.gguf and the most recent GPT4All NuGet package, 0.6.4-alpha

  • Debug x64
  • Visual Studio 2022

What platform did you target for the build? AnyCPU? If so, please change it to x64, or whatever your actual platform is, but not AnyCPU.
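
If it helps to rule that out, here is a quick check (plain .NET APIs, nothing GPT4All-specific) that prints what the process is actually running as:

using System;
using System.Runtime.InteropServices;

// The native libllmodel.dll under runtimes/win-x64 can only be loaded
// by a 64-bit process, so confirm what this process was launched as.
Console.WriteLine($"64-bit process: {Environment.Is64BitProcess}");
Console.WriteLine($"Process architecture: {RuntimeInformation.ProcessArchitecture}");
Console.WriteLine($"OS architecture: {RuntimeInformation.OSArchitecture}");

If this prints X86 on a 64-bit machine, an AnyCPU build with “Prefer 32-bit” enabled would explain the mismatch.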

I tried x64 and x86 and it gives the same error

Hi again, since I need to use the models in C# and integrate them into a personal project, do you know if there is an alternative or another way to do it? Or will we just have to wait until it is resolved?

You can still check out the last version of GPT4All that is currently supported by the C# bindings - I believe that would be commit c13202a6f5f90094629cc6e214a2a4ccd91ccb74.

Then you can follow the build instructions.
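
In case it saves someone a lookup, the checkout would go roughly like this (the repository URL is the official one; the submodule step matches what turned out to be needed below):

git clone https://github.com/nomic-ai/gpt4all.git
cd gpt4all
git checkout c13202a6f5f90094629cc6e214a2a4ccd91ccb74
git submodule update --init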

I did this, used the exact commit mentioned, and after all that, the error is still the same 😦

I’m trying to use mistral-7b-instruct-v0.1.Q4_0.gguf

Edit: Ran git submodule update --init since I wasn’t sure if it was using the right versions of the submodules after checking out the older commit. Got “Submodule path ‘gpt4all-backend/llama.cpp-mainline’: checked out ‘7d4ced850548642b9a1740fa25ecdef249fbf47f’” so it seemed to do something. But the issue remains the same.

I noticed I’m getting this error at the end of build:

mingw32-make[2]: Leaving directory 'C:/Dev/gpt4all/gpt4all-bindings/csharp/runtimes/win-x64/build/mingw'
[ 68%] Built target generated_shaders
mingw32-make[1]: Leaving directory 'C:/Dev/gpt4all/gpt4all-bindings/csharp/runtimes/win-x64/build/mingw'
mingw32-make: *** [Makefile:138: all] Error 2

That’s all it gives me. No error message from whatever command failed. But way further up there’s this error, which may or may not be related:

In file included from C:/Dev/gpt4all/gpt4all-backend/llama.cpp-mainline/kompute/src/include/kompute/Core.hpp:4,
                 from C:/Dev/gpt4all/gpt4all-backend/llama.cpp-mainline/kompute/src/include/kompute/Manager.hpp:7,
                 from C:\Dev\gpt4all\gpt4all-backend\llama.cpp-mainline\kompute\src\Manager.cpp:3:
C:/Dev/gpt4all/gpt4all-bindings/csharp/runtimes/win-x64/build/mingw/_deps/vulkan_header-src/include/vulkan/vulkan.hpp: In instantiation of 'T vk::DynamicLoader::getProcAddress(const char*) const [with T = void (* (*)(VkInstance_T*, const char*))()]':
C:\Dev\gpt4all\gpt4all-backend\llama.cpp-mainline\kompute\src\Manager.cpp:182:64:   required from here
C:/Dev/gpt4all/gpt4all-bindings/csharp/runtimes/win-x64/build/mingw/_deps/vulkan_header-src/include/vulkan/vulkan.hpp:12081:14: error: cast between incompatible function types from 'FARPROC' {aka 'long long int (*)()'} to 'void (* (*)(VkInstance_T*, const char*))()' [-Werror=cast-function-type]
12081 |       return ( T )::GetProcAddress( m_library, function );
      |              ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The DLL files are created fine, though.

Edit 2: Finally got it working by building the natives with MSVC.
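
For anyone else hitting the MinGW failure: the error above is GCC's -Werror=cast-function-type firing on the FARPROC cast inside vulkan.hpp, which MSVC does not treat as an error. A rough sketch of the MSVC route, assuming CMake and a VS 2022 toolchain (the exact source directory and output layout may differ in your checkout):

:: From a "Developer Command Prompt for VS 2022", in the repo root
cmake -S gpt4all-backend -B build -G "Visual Studio 17 2022" -A x64
cmake --build build --config Release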

Hi again, since I need to use the models in C# and integrate them into a personal project, do you know if there is an alternative or another way to do it? Or will we just have to wait until it is resolved?

What is your .NET version?

8.0 in my case.