"Query failed: 'NoneType' object is not iterable" Error when starting Verba Chat #174
Comments
Same here.
Have you tried to add documents?
@Badhansen try clicking the greyed out button under "Select an Embedder". That should allow you to select an embedder and should fix that issue. I still get the
@cha0s It's working. Thank you.
@cha0s Did you manage to solve your issue after embedding?
@zotttttttt After embedding, it's working. Now I can successfully load the document.
Sadly, no, it doesn't work for me. If I can't find some other solution that works, I may try to debug it.
Hi @cha0s! Can you follow the steps below? I think they will work for you as well.
Hope it works for you this time. Thanks.
Let me know if this fix helps for now! I'm looking into debugging this.
@thomashacker It's working.
I notice that every time I refresh, it forces me back to GPT3, even though I only configured a llama model and no OpenAI key. That seems like a bug and a possible culprit. I always set it back to Ollama before I test; it persists as "Ollama" until the next time I press F5. My docker compose looks like:
(Yes, the Ollama URL works; I use it for other AI apps I am researching.) It would help if the logs actually had a backtrace, for instance. My log:
This project looks so interesting! It's a shame that it's broken for me, with little clue of where to start looking.
I have the same problem with NoneType. Ollama is running through Docker and Verba is running through Docker. I tried different models in Ollama, but I still get an error.
Watching the new issues, I believe my error may be related to #184. After seeing this, I cleared all my documents and tried to embed them again. This time it worked. I suspect that when they were first embedded, they were not using the Ollama model, and I believe that may have been why.
I had the same issue, but then realized that autocorrect messed up when I wrote llama3: instead, it wrote llame3. Please make sure that you have set the model correctly:
Then try again.
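A quick way to guard against this kind of typo is to compare the configured model name against the models that are actually available and suggest close matches. This is only a sketch: the function name and model list are hypothetical, not part of Verba's actual code.

```python
import difflib

# Illustrative list; in practice this would come from the models
# actually pulled into Ollama.
AVAILABLE_MODELS = ["llama3", "mxbai-embed-large"]

def check_model_name(configured: str) -> str:
    """Return the configured model name, or raise with a spelling hint."""
    if configured in AVAILABLE_MODELS:
        return configured
    # Suggest close matches so typos like "llame3" are easy to spot.
    suggestions = difflib.get_close_matches(configured, AVAILABLE_MODELS, n=1)
    hint = f" Did you mean '{suggestions[0]}'?" if suggestions else ""
    raise ValueError(f"Unknown model '{configured}'.{hint}")
```

With the list above, `check_model_name("llame3")` raises a `ValueError` suggesting `llama3`.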
@cha0s I'm glad to know that it's working for you.
Thanks a lot everyone for the feedback, we'll make sure to update the README and make the error logs more useful! |
Same problem here: it doesn't work, and I get the same type error. I can't upload docs; I just get errors that mean nothing. I'm running in Docker, the documentation is disjointed, and I've wasted enough time.
Hello @Benniepie! Running in Docker has some issues. You can try using a virtual env instead; that works.
The virtual env option doesn't work on a Windows machine due to an embedded DB problem on Windows.
Hi @zbalsara21 and @Benniepie! You can use this PR; I think the issue is fixed by it. Thanks. PR link: #204
Hello, I had this error but I've finally managed to fix it. I got it to work with both OpenAI GPT3 and Ollama (llama3 and mxbai-embed-large). The problem for me was that Verba wasn't able to reach an LLM. For GPT3 I needed to add
For Ollama I found that I just hadn't pulled the models. I had Ollama in a docker compose with Weaviate and Verba, so I had to pull the models manually. I added a persistent volume to pull into.
Then ran
You could add the command directly in the docker compose YAML, though. Now I can add docs and use the chat with either model with no errors. If this is common, maybe a validation check that all the needed models are available would be good :)
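The validation check suggested above could be sketched like this. It's a hypothetical helper, not part of Verba; it assumes the JSON shape returned by Ollama's `GET /api/tags` endpoint, which lists the models that have been pulled.

```python
def missing_models(tags_response: dict, required: list[str]) -> list[str]:
    """Given the JSON from Ollama's GET /api/tags endpoint, return the
    required models that have not been pulled yet."""
    # /api/tags returns {"models": [{"name": "llama3:latest", ...}, ...]};
    # strip the ":tag" suffix so "llama3" matches "llama3:latest".
    available = {m["name"].split(":")[0] for m in tags_response.get("models", [])}
    return [m for m in required if m not in available]
```

Verba could call this at startup and refuse to proceed (with a clear message) instead of failing later with a bare `NoneType` error.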
I was able to solve this issue with the following steps: also install the embedding model (if you are using Ollama, you need this installed), then add this to your .env. It was simple, but the YouTube tutorial doesn't say that you need to specify OLLAMA_EMBED_MODEL.
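For reference, a .env along these lines is what this fix amounts to. Only OLLAMA_EMBED_MODEL is named in the comment above; the other variable names and values here are assumptions, so check your Verba version's README for the exact keys.

```
# Hypothetical .env sketch — verify the variable names against your
# Verba version's README before using.
OLLAMA_URL=http://localhost:11434
OLLAMA_MODEL=llama3
OLLAMA_EMBED_MODEL=mxbai-embed-large
```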
@fcanfora Thanks! You're definitely right, adding a validation step seems super useful here, we'll add it to the list 🚀 |
Description
The application is up and running, but Verba Chat is not working. It shows "Something went wrong: 'NoneType' object is not iterable", although the Verba variables are available.
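For context, the error text itself just means Python tried to loop over a value that was None, typically a query or retrieval result that failed upstream. A minimal illustration (the function and guard are hypothetical, not Verba's actual code):

```python
# Iterating over None raises exactly:
#   TypeError: 'NoneType' object is not iterable
def format_chunks(chunks):
    # If the retriever returns None (e.g. no embedder selected, or the
    # query failed upstream), guard before iterating.
    if chunks is None:
        return []
    return [c.upper() for c in chunks]
```

Surfacing the upstream failure instead of letting None propagate would make this error far easier to diagnose.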
If I look at the logs, I see the following (I included some lines above and below for context):
Is this a bug or a feature?
Steps to Reproduce
Follow the steps: set .env and run the project.