Correcting spelling errors + adding more questions

aviv
2023-12-27 19:14:00 +02:00
parent a51027b92e
commit d5ffbff131
10 changed files with 92 additions and 131 deletions

@@ -4,9 +4,9 @@ sidebar_position: 4
# Albums
### Can I import assets to Immich while maintaining the existing album structure?
Yes, you can, by using the Immich CLI with the `-a` option; see [this document](/docs/features/command-line-interface) for more info.
### Is there a way to select the order in which photos are presented when an album is opened?
Not yet, but it's [planned](https://github.com/immich-app/immich/discussions/1689).

@@ -14,7 +14,7 @@ Template changes will only apply to new assets. To retroactively apply the templ
### Why are only photos and not videos being uploaded to Immich?
This often happens when using a reverse proxy or Cloudflare tunnel in front of Immich. Make sure your reverse proxy allows large POST requests. In `nginx`, set `client_max_body_size 50000M;` or similar. Also check the disk space of your reverse proxy; some proxies cache requests to disk before passing them on, and if disk space runs out, the request fails.
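For example, a minimal `nginx` server block with the body-size limit raised (the domain and upstream address below are placeholders; adjust them to your setup):

```nginx
server {
    server_name immich.example.com;   # placeholder domain

    # Allow large uploads (e.g. videos) to pass through the proxy
    client_max_body_size 50000M;

    location / {
        proxy_pass http://immich-server:2283;  # placeholder upstream
    }
}
```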
### In the uploads folder, why are photos stored under the wrong date?
@@ -24,25 +24,43 @@ This is fixed by running the storage migration job.
See [backup and restore](/docs/administration/backup-and-restore.md).
### Does Immich support reading face tags from EXIF?
Not at the moment.
### Does Immich support filtering of NSFW images?
This option is not implemented yet, but there is an [open discussion about it on GitHub](https://github.com/immich-app/immich/discussions/2451). You can submit a pull request or vote in the discussion.
### Why are there so many thumbnail generation jobs?
Immich generates three thumbnails for each asset (blurred, small, and large), as well as a thumbnail for each recognized face.
### What happens if an asset exists in more than one account?
All machine learning jobs and thumbnail images are recreated.
### Is it possible to use item compression like in App-Which-Must-Not-Be-Named?
No. There was a discussion about this in the past, but it was [rejected](https://github.com/immich-app/immich/pull/1242). There is, however, an [unofficial way to achieve this](https://gist.github.com/JamesCullum/6604e504318dd326a507108f59ca7dcd).
:::danger
If you do choose to use the unofficial way, it's important to be aware of the following security risks:
- The `golang:1.19.4-alpine3.17` base image was released a year ago and has [18 vulnerabilities for OpenSSL](https://hub.docker.com/layers/library/golang/1.19.4-alpine3.17/images/sha256-8b532e4f43b6ccab31b2542d132720aa6e22f6164e0ed9d4885ef2d7c8b87aa5?context=explore).
- The image `ghcr.io/jamescullum/multipart-upload-proxy:main` was last released a long time ago and has vulnerable versions of OpenSSL and libwebp.
- The `vips-dev` package relies on libwebp, which had a major CVE a few months ago.
- There may be other vulnerabilities as well.
:::
### How can I move all data (photos, persons, albums) from one user to another?
This requires some database queries. You can do this on the command line (in the PostgreSQL container, using the `psql` command), or you can, for example, add an [Adminer](https://www.adminer.org/) container to the `docker-compose.yml` file so that you can use a web interface.
:::warning
This is an advanced operation. If you can't do it with the steps described here, this is not for you.
:::
1. **MAKE A BACKUP** - See [backup and restore](/docs/administration/backup-and-restore.md).

@@ -31,32 +31,20 @@ To remove the **Metadata** you can stop Immich and delete the volume.
docker compose down -v
```
After removing the containers and volumes, the **Files** can be cleaned up (if necessary) from the `UPLOAD_LOCATION` by simply deleting any unwanted files or folders.
## Docker errors
### Why does the machine learning service report workers crashing?
:::note
If the error says the worker is exiting, then this is normal. This is a feature intended to reduce RAM consumption when the service isn't being used.
:::
There are a few reasons why this can happen.
If the error mentions SIGKILL or error code 137, it most likely means the service is running out of memory. Consider either increasing the server's RAM or moving the service to a server with more RAM.
If it mentions SIGILL (note the lack of a K) or error code 132, it most likely means your server's CPU is incompatible. This is unlikely to occur on version 1.92.0 or later; consider upgrading if your version of Immich is below that.
If your version of Immich is below 1.92.0 and the crash occurs after logs about tracing or exporting a model, consider either upgrading or disabling the Tag Objects job.

@@ -4,11 +4,17 @@ sidebar_position: 5
# External Library
### Can I add an external library while keeping the existing album structure?
We haven't put an official mechanism in place to create albums from your external library yet, but there are some [workarounds from the community](https://github.com/immich-app/immich/discussions/4279) that can help you achieve that.
### I got duplicate files in the upload library and the external library
It is important to remember that Immich does not filter duplicate files from an [external library](/docs/features/libraries) at the moment.
The upload library and external libraries use different mechanisms to detect duplicates:
- Upload library: hashes the file
- External library: uses the file path
Therefore, a situation where the same file appears twice in the timeline is possible.
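As a rough illustration of the difference (a sketch, not Immich's actual code), the two duplicate keys could look like this:

```python
import hashlib
import os
import tempfile

def upload_key(path: str) -> str:
    """Upload library: the duplicate key is a checksum of the file contents."""
    with open(path, "rb") as f:
        return hashlib.sha1(f.read()).hexdigest()

def external_key(path: str) -> str:
    """External library: the duplicate key is the file path itself."""
    return os.path.abspath(path)

# The same bytes stored at two paths collide by content hash,
# but count as two different assets by path.
with tempfile.TemporaryDirectory() as d:
    a, b = os.path.join(d, "a.jpg"), os.path.join(d, "b.jpg")
    for p in (a, b):
        with open(p, "wb") as f:
            f.write(b"same image bytes")
    print(upload_key(a) == upload_key(b))      # True  (same content)
    print(external_key(a) == external_key(b))  # False (different paths)
```

This is why the same file can show up once via upload and again via an external library: the two keys never match each other.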

@@ -3,11 +3,13 @@ sidebar_position: 6
---
# Machine Learning
### How does smart search work?
Immich uses CLIP models; more information about CLIP and its capabilities can be [read here](https://openai.com/research/clip), and you can [try it here](https://huggingface.co/models?pipeline_tag=zero-shot-image-classification&sort=trending).
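Conceptually, CLIP maps both images and text queries into the same embedding space, and search ranks images by cosine similarity to the query embedding. A toy sketch with made-up vectors (not Immich's actual pipeline; real embeddings have hundreds of dimensions):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Pretend embeddings produced by a CLIP model
image_embeddings = {
    "beach.jpg":  [0.9, 0.1, 0.0],
    "forest.jpg": [0.1, 0.9, 0.2],
}
query_embedding = [0.8, 0.2, 0.1]  # e.g. the text "sunny beach"

# Rank images by similarity to the query
ranked = sorted(
    image_embeddings,
    key=lambda name: cosine(query_embedding, image_embeddings[name]),
    reverse=True,
)
print(ranked[0])  # beach.jpg ranks first for this query
```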
### How does facial recognition work?
For face detection and recognition, Immich uses [InsightFace models](https://github.com/deepinsight/insightface/tree/master/model_zoo).
### How can I disable machine learning?
@@ -21,105 +23,39 @@ However, disabling all jobs will not disable the machine learning service itself
### I'm getting errors about models being corrupt or failing to download. What do I do?
You can delete the model cache volume, which is where models are downloaded to. This will give the service a clean environment to download the model again.
### Why did Immich decide to remove object detection?
The feature added keywords to images for metadata search, but wasn't used for smart search. Smart search made it unnecessary, as it isn't limited to exact keywords. Combined with it causing crashes on some devices, pulling in many dependencies, and confusing users as to how search worked, it was better to remove the job altogether.
For more info, see [here](https://github.com/immich-app/immich/pull/5903).
### What is the best order of jobs for machine learning to work best?
Because Smart Search and Facial Recognition rely on thumbnail generation [[1](/docs/developer/architecture#:~:text=For%20example%2C%20Smart%20Search%20and%20Facial%20Recognition%20relies%20on%20thumbnail%20generation)], it is recommended to wait for the thumbnail generation job to finish and then run the machine learning jobs (Smart Search and Facial Recognition):
* Let all jobs run except the machine learning jobs and wait for them to finish
* Run Smart Search
* Run Facial Recognition
### Can I use a custom CLIP model?
No, this is not supported. Only models listed on [Hugging Face](https://huggingface.co/immich-app) are compatible. Feel free to make a feature request if there's a model not listed there that you think should be added.
### I want to be able to search in other languages besides English. How can I do that?
You can change to a multilingual model listed [here](https://huggingface.co/collections/immich-app/multilingual-clip-654eb08c2382f591eeb8c2a7) by going to Administration > Machine Learning Settings > Smart Search and replacing the name of the model. Be sure to re-run Smart Search on all assets after this change. You can then search in over 100 languages.
:::note
Changing a model does not delete the previous one. To save disk space, delete the old model from the model cache volume (models are stored in `IMMICH_MODEL-CACHE`).
Feel free to make a feature request if there's a model you want to use that isn't in Immich's [Hugging Face list](https://huggingface.co/immich-app).
:::
### Does Immich support facial recognition for videos?
This is not currently implemented, but may be in the future.
However, Immich scans a thumbnail image created from the video; if there is a face in the video's thumbnail, Immich will try to recognize it.
### Does Immich have animal recognition?
No.

@@ -4,13 +4,13 @@ sidebar_position: 2
# Mobile App
### What is the difference between the cloud icons on the mobile app?
| Icon | Description |
| ---------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------- |
| ![cloud](/img/cloud.svg) | Asset is only available in the cloud and was uploaded from some other device (like the web client) or was deleted from this device after upload |
| ![cloud-cross](/img/cloud-off.svg) | Asset is only available locally and has not yet been backed up |
| ![cloud-done](/img/cloud-done.svg) | Asset was uploaded from this device and is now backed up to the server; the original file is still on the device |
### iOS app
@@ -22,3 +22,11 @@ there are some reporting issue with iPhoneXR regarding the problem with scanning
:::note
If you think you know what the solution to the problem might be, we'd be happy to discuss it on [Discord](https://discord.com/channels/979116623879368755/1071165397228855327)/[Github](https://github.com/immich-app/immich/discussions)
:::
### I can't log into the application after the server update, what can I do?
Make sure you've updated the app as well.
:::note
App store updates sometimes take longer because the stores (Google Play and Apple's App Store)
need to approve the update first, which may take some time.
:::

@@ -23,8 +23,8 @@ By default, a container has no resource constraints and can use as much of a giv
You can look at the [original docker docs](https://docs.docker.com/config/containers/resource_constraints/) or use this [guide](https://www.baeldung.com/ops/docker-memory-limit) to learn how to do this.
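As an example of what a constraint could look like, here is a hedged `docker-compose.yml` fragment limiting the machine learning container (the service name matches the usual Immich compose file, but the limit values are placeholders; adjust them to your hardware):

```yaml
services:
  immich-machine-learning:
    # Cap RAM and CPU for this container; values are examples only
    mem_limit: 4g
    cpus: 2.0
```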
### Can I boost machine learning speed?
Yes, you can, by increasing the job concurrency. With higher concurrency, the host will work on more assets in parallel.
You can do it by:
1. Log in as an admin user
2. Go to Administration
3. Open Jobs
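The effect of raising concurrency can be sketched with a thread pool: more workers let more I/O-bound jobs overlap, which is why higher concurrency speeds things up until the CPU or storage saturates. A toy illustration, not Immich's actual job runner:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def process_asset(asset_id: int) -> int:
    time.sleep(0.05)  # stand-in for I/O-bound work (reading a file, calling ML)
    return asset_id

def run(concurrency: int, n_assets: int = 8) -> float:
    """Process n_assets fake jobs with the given worker count; return elapsed seconds."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        list(pool.map(process_asset, range(n_assets)))
    return time.perf_counter() - start

serial = run(concurrency=1)
parallel = run(concurrency=4)
print(parallel < serial)  # True: the same 8 jobs finish sooner with 4 workers
```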
@@ -33,16 +33,12 @@ You can do it by
:::danger
On a normal machine, 2 or 3 concurrent jobs can probably max the CPU, so if you're not hitting those maximums with, say, 30 jobs,
Note that storage speed and latency may quickly become the limiting factor, particularly when using HDDs.
For reference, I never went above 32 concurrent jobs while testing with an RTX 4090 GPU. For example, if machine learning runs on a CPU such as an i7-8700, going a bit higher than the default, such as 16, would be reasonable.
Do not exaggerate with the number of jobs, because you're probably thoroughly overloading the server.
More info [here](https://discord.com/channels/979116623879368755/994044917355663450/1174711719994605708).
:::
### Why is Immich using so much of my CPU?
When a large number of assets are uploaded to Immich, it makes sense that the CPU and RAM will be heavily used by machine learning work and thumbnail generation. After that, CPU usage should drop to around 3-5%.

@@ -1,9 +1,18 @@
---
sidebar_position: 2
---
# Setup
:::note
If there is something you're planning to work on, just give us a heads-up in [Discord](https://discord.com/channels/979116623879368755/1071165397228855327) about what it is, so we can:
1. Let you know if it's something we would accept into Immich
2. Provide any guidance on how something like that would ideally be implemented
3. Ensure nobody is already working on that issue/feature so we don't duplicate effort
Thanks for being interested in contributing 😊
:::
## Environment
### Server and web app

@@ -5,7 +5,7 @@ in order to connect to the database the immich_postgres container **must be on**
:::
:::info
The password and username match the ones specified in the `.env` file. If you have changed these values, **they must be changed accordingly in the Connection screen**, as explained below.
:::
- [pgAdmin](https://www.pgadmin.org/download) must be installed
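For reference, the database variables in a default Immich `.env` look roughly like this (variable names are taken from the Immich example config; the values shown are placeholders, so verify them against your own `.env`):

```
DB_USERNAME=postgres
DB_PASSWORD=postgres
DB_DATABASE_NAME=immich
```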

@@ -1,7 +1,7 @@
# Remote Access
This page gives a few pointers on how to access your Immich instance from outside your LAN.
You can read the [full discussion on Discord](https://discord.com/channels/979116623879368755/1122615710846308484).
:::danger
Never forward port 2283 directly to the internet without additional configuration. This will expose the web interface via HTTP to the internet, making you susceptible to [man-in-the-middle](https://en.wikipedia.org/wiki/Man-in-the-middle_attack) attacks.