
"Query failed: 'NoneType' object is not iterable" Error when starting Verba Chat #174

Closed
1 task done
Badhansen opened this issue May 20, 2024 · 26 comments
Labels
investigating Bugs that are still being investigated whether they are valid

Comments

@Badhansen
Contributor

Badhansen commented May 20, 2024

Description

Application is up and running, but Verba Chat is not working. It is showing "Something went wrong: 'NoneType' object is not iterable", although Verba variables are available.

Looking at the logs, I see the following (some lines above and below included for context):

INFO:     127.0.0.1:64363 - "POST /api/set_config HTTP/1.1" 200 OK
INFO:     ('127.0.0.1', 64374) - "WebSocket /ws/generate_stream" [accepted]
INFO:     connection open
INFO:     127.0.0.1:64363 - "POST /api/suggestions HTTP/1.1" 200 OK
✔ Received query: What is verba?
⚠ Query failed: 'NoneType' object is not iterable
INFO:     127.0.0.1:64386 - "POST /api/query HTTP/1.1" 200 OK
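For readers unfamiliar with the error: it means some backend code iterated over a value that was None, typically because a lookup returned no result. A minimal illustration of the failure mode (hypothetical code, not Verba's actual implementation):

```python
def run_query(query):
    # Hypothetical retriever that returns None when no embedder or
    # generator is configured, instead of an empty list.
    return None

try:
    # Iterating over None raises the exact TypeError seen in the logs.
    for chunk in run_query("What is verba?"):
        print(chunk)
except TypeError as err:
    print(f"Query failed: {err}")
```

Returning an empty list (or raising a descriptive error) from the retriever would avoid the opaque message.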

Is this a bug or a feature?

  • Bug

Steps to Reproduce

Set .env and run the project.

[Screenshot 2024-05-20 at 1:20:20 AM]

@cha0s

cha0s commented May 20, 2024

Same here.

@zotttttttt

Have you tried to add documents?

@Badhansen
Contributor Author

> Have you tried to add documents?

I have tried it but no luck. I have attached an image of this issue.

(INFO) Importing...
(INFO) Importing 1 files with BasicReader
(INFO) Importing Building REST APIs with Flask.pdf
(SUCCESS) Loaded 1 documents in 0.7s
(INFO) Starting Chunking with TokenChunker
(SUCCESS) Chunking completed with 102 chunks in 0.1s
(INFO) Starting Embedding with ADAEmbedder
(ERROR) Embedding not successful: Chunk mismatch for fdf24eb7-2c09-439b-b7ce-4169a3c1e49f 0 != 102
(ERROR) Chunk mismatch for fdf24eb7-2c09-439b-b7ce-4169a3c1e49f 0 != 102
[Screenshot 2024-05-20 at 12:19:52 PM]

@cha0s

cha0s commented May 20, 2024

I added a document successfully. My test involved just uploading a PDF and then trying to ask a simple question about it. Got the error when trying to chat.

I do notice that #171 and #167 seem to be the very same issue.

@cha0s

cha0s commented May 20, 2024

@Badhansen try clicking the greyed out button under "Select an Embedder". That should allow you to select an embedder and should fix that issue.

I still get the Query failed: 'NoneType' object is not iterable problem even after successful embedding.

@Badhansen
Contributor Author

Badhansen commented May 20, 2024

@cha0s It's working. Thank you.

[Screenshot 2024-05-20 at 6:45:23 PM]

@Badhansen
Contributor Author

@cha0s Did you manage to solve your issue after embedding?

@Badhansen
Contributor Author

@zotttttttt After embedding, it's working. Now I can successfully load the document.

@cha0s

cha0s commented May 20, 2024

Sadly, no, it doesn't work for me. If I can't find some other solution that works, I may try to debug it.

@Badhansen
Contributor Author

Hi @cha0s! Can you try the steps below? I think it will work for you as well.

  1. Check that llama3 is installed and running in the background. In the screenshot it's running and answering questions.
[Screenshot 2024-05-20 at 7:01:48 PM]
  2. Set up the .env file from the .env.example file.
  3. Re-install the project with this command:
pip install -e .
  4. As you already mentioned, select the embedding option.

Hope it works for you this time. Thanks.

@thomashacker
Collaborator

Let me know if this fix helps for now! I'm looking into debugging this.

@thomashacker thomashacker added the investigating Bugs that are still being investigated whether they are valid label May 20, 2024
@Badhansen
Contributor Author

@thomashacker It's working

@cha0s

cha0s commented May 21, 2024

  1. llama is running, the embedder is working fine
  2. I'm using docker compose
  3. ''
  4. Embedding works fine. Chat is broken.

[image]

I notice that every time I refresh it forces me back to GPT3, even though I only put a llama model and no OpenAI key:

[image]

That seems like a bug and a possible culprit. I always set it back to ollama before I test. It persists as "Ollama" until next time I f5.

My docker compose looks like:

    environment:
      - WEAVIATE_URL_VERBA=http://weaviate:8080
      - OLLAMA_URL=<MY OLLAMA URL>
      - OLLAMA_MODEL=llama3

(Yes, the ollama URL works, I use it for other AI apps I am researching.)

It would help if the logs included an actual backtrace, for instance. My log:

May 21 06:11:42 HOSTNAME docker-compose[2950244]: verba_1     | INFO:     IP:59578 - "POST /api/suggestions HTTP/1.1" 200 OK
May 21 06:11:42 HOSTNAME docker-compose[2950244]: verba_1     | ✔ Received query: MY QUERY
May 21 06:11:42 HOSTNAME docker-compose[2950244]: verba_1     | ⚠ Query failed: 'NoneType' object is not iterable
May 21 06:11:42 HOSTNAME docker-compose[2950244]: verba_1     | INFO:     IP:59578 - "POST /api/query HTTP/1.1" 200 OK

This project looks so interesting! It's a shame that it's broken for me with little clue of where to start looking.
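The backtrace cha0s asks for could be captured with Python's standard logging module; a sketch of the pattern, where handle_query and the retriever argument are illustrative names, not Verba's actual API:

```python
import logging
import traceback

logger = logging.getLogger("verba.query")

def handle_query(query, retriever):
    try:
        return list(retriever(query))
    except Exception as err:
        # logger.exception logs the message *and* the full traceback,
        # which pinpoints the failing line instead of just the message.
        logger.exception("Query failed: %s", err)
        # Optionally surface the formatted traceback to the caller/UI too.
        return {"error": str(err), "trace": traceback.format_exc()}
```

With this in place, "Query failed: 'NoneType' object is not iterable" would come with the file and line that raised it.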

@AntipatternCorp

I have the same problem with NoneType. Ollama is running through docker and Verba is running through docker. I tried different models in Ollama, but I still get an error.

@cha0s

cha0s commented May 21, 2024

Watching the new issues, I believe my error may be related to #184

After seeing this I cleared all my documents and tried to embed them again. This time it worked. I suspect when they were first embedded they were not using the ollama model.

I believe that may have been why f5'ing the page kept resetting the generator. That also seems to have gone away after re-embedding the documents.

@eddieespinal

I had the same issue but then realized that autocorrect messed up when I typed llama3 and wrote llame3 instead. Please make sure that you have set the model correctly:

OLLAMA_URL=http://localhost:11434
OLLAMA_MODEL=llama3

Then try again.

@Badhansen
Contributor Author

> Watching the new issues, I believe my error may be related to #184
>
> After seeing this I cleared all my documents and tried to embed them again. This time it worked. I suspect when they were first embedded they were not using the ollama model.
>
> I believe that may have been why f5'ing the page kept resetting the generator. That also seems to have gone away after re-embedding the documents.

@cha0s I'm glad to know that it's working for you.

@thomashacker
Collaborator

Thanks a lot everyone for the feedback, we'll make sure to update the README and make the error logs more useful!

@Benniepie

Same problem here: it doesn't work, and I get the same type error. I can't upload docs, just errors that mean nothing. I'm running in Docker, the documentation is disjointed, and I've wasted enough time.

@Badhansen
Contributor Author

Hello @Benniepie! Running in Docker has some issues. You can try a virtual env instead; that works.

@zbalsara21

zbalsara21 commented May 28, 2024

The virtual env option doesn't work on Windows machines due to an embedded-DB problem on Windows, so we are forced to use Docker. Is there any solution to this error? Are there folks using Docker without issues?

@Badhansen
Contributor Author

Hi @zbalsara21 and @Benniepie, I think the issue is fixed by this PR. Thanks.

PR Link: #204

@fcanfora

fcanfora commented Jun 2, 2024

Hello, I had this error but I finally managed to fix it. I got it working with both OpenAI GPT3 and Ollama (llama3 and mxbai-embed-large).

The problem for me was that Verba wasn't able to reach an LLM. For GPT3 I needed to add OPENAI_BASE_URL=https://api.openai.com/v1 to my environment.

For Ollama I found that I simply hadn't pulled the models. I had Ollama in a docker compose with weaviate and verba, so I had to pull the models manually. I added a persistent volume to pull into.

  ollama:
    image: ollama/ollama:latest
    volumes:
     - YOUR_DATA_DIR/ollama_data:/root/.ollama
    ports:
      - 11434:11434

Then ran

docker exec -it verba-ollama-1 ollama pull mxbai-embed-large
docker exec -it verba-ollama-1 ollama pull llama3

You could add the command directly in the docker compose yaml though.

Now I can add docs and use the chat with either model with no errors.

If this is common, maybe a validation step checking that all needed models are available would be good :)
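Such a validation could compare the configured model names against Ollama's local tag list (Ollama exposes the pulled models via its GET /api/tags endpoint). A hedged sketch, where missing_models and check_ollama are hypothetical helpers, not part of Verba:

```python
import json
import urllib.request

def missing_models(available, required):
    """Return required model names absent from the pulled-model list.

    `available` is the "models" list from Ollama's GET /api/tags
    response, e.g. [{"name": "llama3:latest"}, ...].
    """
    # Compare on the base name so "llama3" matches "llama3:latest".
    pulled = {m["name"].split(":")[0] for m in available}
    return [r for r in required if r.split(":")[0] not in pulled]

def check_ollama(ollama_url, required):
    # Fetch the locally pulled models and fail fast if any are missing.
    with urllib.request.urlopen(f"{ollama_url}/api/tags") as resp:
        models = json.load(resp).get("models", [])
    missing = missing_models(models, required)
    if missing:
        raise RuntimeError(f"Pull these models first: {', '.join(missing)}")
```

Called at startup, e.g. check_ollama("http://localhost:11434", ["llama3", "mxbai-embed-large"]), this would replace the opaque NoneType error with an actionable message.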

@lucasrocha7111

I was able to solve this issue with the following steps:

Install the embedding model as well (if you are using Ollama you need this installed):
ollama pull mxbai-embed-large

Then add this to your .env:
OLLAMA_EMBED_MODEL=mxbai-embed-large

It was simple, but the YouTube tutorial doesn't mention that you need to specify OLLAMA_EMBED_MODEL.
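Since several reports in this thread trace back to a missing or misspelled variable (llame3, an unset OLLAMA_EMBED_MODEL), a startup check along these lines would catch that early. check_env is a hypothetical helper, and the variable list is taken from this thread, not from Verba's source:

```python
import os

# Variables mentioned in this thread; adjust to your setup.
REQUIRED_VARS = [
    "WEAVIATE_URL_VERBA",
    "OLLAMA_URL",
    "OLLAMA_MODEL",
    "OLLAMA_EMBED_MODEL",
]

def check_env(env=None):
    """Raise a descriptive error if any required variable is unset or empty."""
    env = os.environ if env is None else env
    missing = [v for v in REQUIRED_VARS if not env.get(v)]
    if missing:
        raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")
```

This only catches unset names, not typos in values like llame3; the model-availability check discussed above would cover those.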


@thomashacker
Collaborator

@fcanfora Thanks! You're definitely right, adding a validation step seems super useful here, we'll add it to the list 🚀
