Did you have a mistake in your caddyfile? Or, what led to this? I’m using caddy as well and could be good to know, though I don’t recall seeing that warning.
Here. I’m on docker compose version 3.7. I think it’s correct…
services:
  elinorr:
    image: registry.gitlab.com/mwirth001/elinorr:latest
    container_name: elinorr
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/Chicago
      - SHOW_LIST="design-squad"
    volumes:
      - '/mnt/user/appdata_docker/elinorr:/elinorr/config'
      - '/mnt/user/media/zz_downloads/elinorr_downloads:/elinorr/downloads'
    restart: unless-stopped
Ty sir 🫡
So I just got back to this today and I still have the same error:
KeyError: 'collections'
2025-04-08 20:31:17,226 - INFO - Entering elinorr function
2025-04-08 20:31:17,226 - INFO - Successfully set show_list.
2025-04-08 20:31:17,226 - INFO - Successfully finished pre-flight checks, beginning main loop of Elinorr.
2025-04-08 20:31:17,228 - INFO - Successfully connected to elinorr.db.
2025-04-08 20:31:17,229 - INFO - Processing show "design-squad"
Traceback (most recent call last):
  File "/elinorr/elinorr.py", line 175, in <module>
    elinorr(SHOW_LIST, WORK_DIR, SCAN_INTERVAL)
  File "/elinorr/elinorr.py", line 136, in elinorr
    for episode in content_json['collections']['episodes']['content']:
KeyError: 'collections'
2025-04-08 20:31:19,132 - INFO - Entering elinorr function
2025-04-08 20:31:19,132 - INFO - Successfully set show_list.
2025-04-08 20:31:19,133 - INFO - Successfully finished pre-flight checks, beginning main loop of Elinorr.
2025-04-08 20:31:19,135 - INFO - Successfully connected to elinorr.db.
2025-04-08 20:31:19,137 - INFO - Processing show "design-squad"
Traceback (most recent call last):
  File "/elinorr/elinorr.py", line 175, in <module>
    elinorr(SHOW_LIST, WORK_DIR, SCAN_INTERVAL)
  File "/elinorr/elinorr.py", line 136, in elinorr
    for episode in content_json['collections']['episodes']['content']:
KeyError: 'collections'
Also tried with just "design-squad", and with "cyberchase design-squad".
Ah cool. Thanks for checking it out.
Ah. I just walked away from my computer for the day. I can check tomorrow.
So I got the container running, and the logs are showing the same error loop every couple of seconds. I’m wondering if it’s because the video URLs don’t match what’s in your GitLab readme:
https://pbskids.org/videos/watch/cyberchase-full-episodes/1385841/if-you-cant-stand-the-heat/1568637
I’m also interested 🫡
Maybe I’ll have to try again. https://www.reddit.com/r/Lidarr/comments/1bq1zot/comment/m6ecbr2/
This was my experience downloading with deemix and trying to import into lidarr, even with the two programs combined into one.
This looks interesting. Just a few thoughts (sorry if these don’t belong in a recruitment post, feel free to ignore):
Radarr and Sonarr already have a function to look at one’s library and offer recommendations for new adds. How is your proposed functionality different?
Lidarr’s import of an existing library, or even of its own grabs/downloads, is atrocious. It’s damn near unusable, and a lot of people (myself included) don’t use it because it’s functionally useless if one has to manually import every song or album. I’m not sure it’s worth spending time building anything on top of Lidarr. Thoughts?
The Anna’s archive integration for readarr looks awesome, though. I look forward to trying that out.
A lot of people are only interested in downloading FLAC for music. IIRC, spotdl doesn’t do FLAC. Any plans to integrate, for example, lucida, or deemix (if it’s even still working)? There have been several successful tools built on top of deemix in the past, for example deemon; I’m not sure how integration with those would look. Ideally one could add artists Lidarr-style, supply a Deezer ARL, and the program does the rest (see lidarr-on-steroids).
If you’re using docker it’s easy to set up a second qbittorrent on a different port to meet different needs.
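A rough sketch of what I mean, using the linuxserver image (the container name, host paths, and ports here are just placeholders, adjust to whatever the first instance isn’t using):

services:
  qbittorrent-second:
    image: lscr.io/linuxserver/qbittorrent:latest
    container_name: qbittorrent-second
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/Chicago
      - WEBUI_PORT=8081   # pick a WebUI port the first instance isn't on
    volumes:
      - '/mnt/user/appdata_docker/qbittorrent-second:/config'
      - '/mnt/user/media/zz_downloads/second_instance:/downloads'
    ports:
      - '8081:8081'       # WebUI
      - '6882:6881'       # torrent traffic, offset from the default 6881
      - '6882:6881/udp'
    restart: unless-stopped

Then just point whichever app needs the different settings at port 8081 instead of the first instance.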
Sure, and let me know how it goes for you. I’m on a Dell R720xd, about to upgrade my RAM from 128 to 296 GB… don’t want to spend the money for a new GPU right now.
I’ll report back after I try again.
I tried minicpm-v, granite3.2-vision, and mistral.
Granite didn’t work with paperless-gpt at all. Mistral worked sometimes, but it also sometimes just kept running and didn’t finish within a reasonable time (15 minutes for 2 pages). minicpm-v finishes every time, but I just looked at some of the results and it seems as though it’s not even worth keeping running either. I suppose the first one I tried that gave me a good impression was a fluke.
To be fair, I’m a noob at local AI, and I also don’t have a good GPU (GTX 1650). So these failures could all be self-induced. I like the idea of AI-powered OCR, so I’ll probably try again in the future…
I spun up Ollama and paperless-gpt to add an AI OCR sidecar to paperless-ngx. It’s okay. It can read handwritten stuff okay-ish, which is better than Tesseract (which doesn’t read handwriting at all), so I throw handwritten stuff at it, but on typed text the difference was marginal in the single day I spent testing 3 different models on a few different typed receipts.
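Roughly this compose setup, if anyone wants to try it. This is just a minimal sketch; the environment variable names are from memory, so double-check them against the paperless-gpt readme, and the paths and token are placeholders:

services:
  ollama:
    image: ollama/ollama:latest
    container_name: ollama
    volumes:
      - '/mnt/user/appdata_docker/ollama:/root/.ollama'
    restart: unless-stopped

  paperless-gpt:
    image: icereed/paperless-gpt:latest
    container_name: paperless-gpt
    environment:
      # variable names from memory -- verify against the paperless-gpt readme
      - PAPERLESS_BASE_URL=http://paperless-ngx:8000
      - PAPERLESS_API_TOKEN=changeme
      - LLM_PROVIDER=ollama
      - LLM_MODEL=minicpm-v
      - OLLAMA_HOST=http://ollama:11434
    depends_on:
      - ollama
    restart: unless-stopped

You still have to pull the model into Ollama once (docker exec ollama ollama pull minicpm-v) before paperless-gpt can use it.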
This looks really cool. Anyone know if there is a way to make something like this ingest a GEDCOM file and put out a visual like this?