feat: add regex to url before scraping #4174

Open · wants to merge 2 commits into mealie-next
9 changes: 8 additions & 1 deletion mealie/services/scraper/scraper.py
@@ -1,5 +1,6 @@
 from enum import Enum
 from uuid import uuid4
+from re import search as regex_search
 
 from fastapi import HTTPException, status
 from slugify import slugify
@@ -31,7 +32,13 @@ async def create_from_url(url: str, translator: Translator) -> tuple[Recipe, Scr
         Recipe: Recipe Object
     """
     scraper = RecipeScraper(translator)
-    new_recipe, extras = await scraper.scrape(url)
+
+    extracted_url = regex_search(r"(https?://|www\.)[^\s]+", url)
+
+    if not extracted_url:
+        raise HTTPException(status.HTTP_400_BAD_REQUEST, {"details": ParserErrors.BAD_RECIPE_DATA.value})
+
+    new_recipe, extras = await scraper.scrape(extracted_url.group(0))
 
     if not new_recipe:
         raise HTTPException(status.HTTP_400_BAD_REQUEST, {"details": ParserErrors.BAD_RECIPE_DATA.value})
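
For context, the new step only needs to locate the first URL-like token in whatever text the user pasted and discard everything around it. Below is a minimal standalone sketch of that behaviour using just the standard library and the same pattern as the diff; the helper name and the example URLs are hypothetical, not part of the PR.

from re import search as regex_search

def extract_url(raw: str) -> str | None:
    # Return the first http(s)/www-prefixed token in the pasted text, or None if no match.
    match = regex_search(r"(https?://|www\.)[^\s]+", raw)
    return match.group(0) if match else None

# A share-sheet paste that mixes prose with the link still yields just the URL:
print(extract_url("Check out this recipe! https://example.com/best-pasta so good"))
# -> https://example.com/best-pasta

# Input with no URL-like token returns None, which in the PR triggers the
# HTTP 400 with ParserErrors.BAD_RECIPE_DATA before any scraping is attempted.
print(extract_url("no link here"))
# -> None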