Compare commits
13 Commits
| SHA1 |
| --- |
| b258f3552a |
| e803bc909f |
| d078aea32b |
| fb2cfcb640 |
| 99f85fb359 |
| 454fb106d2 |
| 7d078a2f0e |
| b015648bfe |
| a58482cb2b |
| 9dd1d81536 |
| a2f5674bbb |
| 837ad24f58 |
| 058c62b111 |
.github/workflows/prepare-release.yml (3 changes, vendored)
```diff
@@ -34,6 +34,9 @@ jobs:
         with:
           token: ${{ secrets.ORG_RELEASE_TOKEN }}
 
+      - name: Install Poetry
+        run: pipx install poetry
+
       - name: Bump version
         run: misc/release/pump-version.sh -s "${{ inputs.serverBump }}" -m "${{ inputs.mobileBump }}"
 
```
README.md

```diff
@@ -19,6 +19,7 @@
 <br/>
 <p align="center">
   <a href="README_zh_CN.md">中文</a>
+  <a href="README_tr_TR.md">Türkçe</a>
 </p>
 
 ## Disclaimer
```
README_tr_TR.md (new file, 104 lines)
<p align="center">
  <br/>
  <a href="https://opensource.org/licenses/MIT"><img src="https://img.shields.io/badge/license-MIT-green.svg?color=3F51B5&style=for-the-badge&label=License&logoColor=000000&labelColor=ececec" alt="License: MIT"></a>
  <a href="https://discord.gg/D8JsnBEuKb">
    <img src="https://img.shields.io/discord/979116623879368755.svg?label=Discord&logo=Discord&style=for-the-badge&logoColor=000000&labelColor=ececec" alt="Discord"/>
  </a>
  <br/>
  <br/>
</p>

<p align="center">
  <img src="design/immich-logo.svg" width="150" title="Login With Custom URL">
</p>
<h3 align="center">Immich - Yüksek performanslı, kendine ait barındırılan fotoğraf ve video yedekleme çözümü</h3>
<br/>
<a href="https://immich.app">
  <img src="design/immich-screenshots.png" title="Main Screenshot">
</a>
<br/>
<p align="center">
  <a href="README.md">English</a>
  <a href="README_zh_CN.md">中文</a>
</p>

## Feragatname

- ⚠️ Proje **çok aktif** bir şekilde geliştirilmektedir.
- ⚠️ Hatalar ve uygulama yapısını bozan değişiklikler olabilir.
- ⚠️ **Uygulamayı, fotoğraflarınızı ve videolarınızı saklamanın tek yöntemi olarak kullanmayın!**

## Content

- [Resmi Belgeler](https://immich.app/docs)
- [Yol Haritası](https://github.com/orgs/immich-app/projects/1)
- [Demo](#demo)
- [Özellikler](#özellikler)
- [Giriş](https://immich.app/docs/overview/introduction)
- [Kurulum](https://immich.app/docs/install/requirements)
- [Katkı Sağlama Rehberi](https://immich.app/docs/overview/support-the-project)
- [Projeyi Destekle](#projeyi-destekle)

## Belgeler

Kurulum dahil olmak üzere resmi belgeleri https://immich.app/ adresinde bulabilirsiniz.

## Demo

Web demo adresi: https://demo.immich.app

Mobil uygulama için `Server Endpoint URL` olarak `https://demo.immich.app/api` adresini kullanabilirsiniz.

```bash title="Demo Bilgileri"
Giriş bilgileri:
email: demo@immich.app
password: demo
```

```
Server Özellikleri: Free-tier Oracle VM - Amsterdam - 2.4Ghz quad-core ARM64 CPU, 24GB RAM
```

# Özellikler

| Özellikler | Mobile | Web |
| --- | --- | --- |
| Videoları ve fotoğrafları yükleme ve görüntüleme | Evet | Evet |
| Uygulama açıldığında otomatik yedekleme | Evet | N/A |
| Yedekleme için seçilebilir albüm(ler) | Evet | N/A |
| Fotoğrafları ve videoları yerel cihaza yükleme | Evet | Evet |
| Çoklu kullanıcı desteği | Evet | Evet |
| Albüm ve paylaşılan albümler | Evet | Evet |
| Silinebilir/sürüklenebilir kaydırma çubuğu | Evet | Evet |
| RAW (HEIC, HEIF, DNG, Apple ProRaw) format desteği | Evet | Evet |
| Metadata'ya uygun görüntüleme (EXIF, map) | Evet | Evet |
| Metadata, objects, faces ve CLIP'e göre arama | Evet | Evet |
| Yönetimsel işlevler (kullanıcı yönetimi) | Hayır | Evet |
| Arka planda yedekleme | Evet | N/A |
| Sanal kaydırma | Evet | Evet |
| OAuth desteği | Evet | Evet |
| API anahtarları | N/A | Evet |
| LivePhoto yedekleme ve oynatma | iOS | Evet |
| Kullanıcı tanımlı depolama yapısı | Evet | Evet |
| Herkese açık paylaşım | Hayır | Evet |
| Arşiv ve Favoriler | Evet | Evet |
| Dünya haritası | Hayır | Evet |
| Partner paylaşımı | Evet | Evet |
| Yüz tanıma ve kümeleme | Hayır | Evet |
| Çevrimdışı destek | Evet | Hayır |

# Projeyi Destekle

Bu projeye bağlı kaldım ve durmayacağım. Belgeleri güncellemeye, yeni özellikler eklemeye ve hataları düzeltmeye devam edeceğim. Ancak bunu tek başıma yapamam. Bu yüzden devam etme konusunda bana motivasyon sağlamanız için yardımınıza ihtiyacım var.

[selfhosted.show - In the episode 'The-organization-must-not-be-name is a Hostile Actor'](https://selfhosted.show/79?t=1418) bölümünde söylendiği üzere, bu projede takımımın ve benim projeye harcadığımız büyük bir çaba var. Bir gün bunu tam zamanlı olarak yapabilmeyi çok isterim. Bunu gerçekleştirebilmek için gerçekten sizlerin desteğine ihtiyacım var.

Eğer bu size doğru bir amaç gibi geliyorsa ve uygulamanın uzun bir süre boyunca kullanacağınız bir şey olduğunu düşünüyorsanız, aşağıdaki bağlantılardan birini kullanarak bana destek olabilirsiniz.

## Bağış

- [Aylık bağış](https://github.com/sponsors/alextran1502) via GitHub Sponsors
- [Bir seferlik bağış](https://github.com/sponsors/alextran1502?frequency=one-time&sponsor=alextran1502) via GitHub Sponsors
- [Liberapay](https://liberapay.com/alex.tran1502/)
- [buymeacoffee](https://www.buymeacoffee.com/altran1502)
- Bitcoin: 1FvEp6P6NM8EZEkpGUFAN2LqJ1gxusNxZX
README_zh_CN.md

```diff
@@ -23,6 +23,7 @@
 
 <p align="center">
   <a href="README.md">English</a>
+  <a href="README_tr_TR.md">Türkçe</a>
 </p>
 
```
docs/blog/2023/06-24/update.mdx (new file, 105 lines)
---
title: June 2023 update
authors: [alextran]
tags: [update]
---

Hello everybody, Alex here!

I am back with another update on Immich. It has been only a month since my last update (May 18th, 2023), but it feels like forever. I think the rapid releases of Immich and the amount of work change the perception of time in Immich's world. We have some exciting updates that I think you will like.

Before going into detail, on behalf of the core team, I would like to thank all of you for loving Immich and contributing to the project. Thank you for helping me make Immich an enjoyable alternative to Google Photos, so that you have complete control of your data and privacy. I know we are still young and have a lot of work to do, but I am confident we will get there with help from the community. I appreciate all of you from the bottom of my heart!

<!--truncate-->

And now, to the exciting part: what is new in Immich's world?

- Initial support for existing galleries.
- Memory feature.
- Support for XMP sidecars.
- Support for more raw formats.
- Justified layout for the web timeline and blurred thumbnail hash.
- A mechanism to host machine learning on a completely different machine.

## Support for existing galleries

I know this is the most controversial topic when it comes to Immich's way of ingesting photos and videos. For many users, having to upload photos and videos to Immich simply does not work. We have listened to, discussed, and digested this feature internally more than you might imagine, because it is not a simple feature to tackle while keeping performance and user experience at the top level, which is Immich's primary goal.

Thankfully, we have many great contributors and developers who want to make this come true. So we came up with an initial implementation of this feature in the form of support for a read-only gallery.

To be concise, Immich can now read in gallery files, register their paths in the database, and then generate the necessary files and put them through Immich's machine learning pipeline, so you can use all the goodness of Immich without needing to upload them. Since this is the initial implementation, some actions/behaviors are not yet supported, and we aim to build toward them in future releases, namely:

- Assets are not automatically synced and must instead be manually synced with the CLI tool.
- Only new files that are added to the gallery will be detected.
- Deleted and moved files will not be detected.

You can find more information on how to use the feature by reading the documentation [here](/docs/features/read-only-gallery).

## Memory feature

This is a fun feature that the team and I have wanted to build for a long time, but we had to put it off because of the refactoring of the code base. The code base is now in good enough shape to circle back and add more exciting features.

The memory feature is very similar to GPhotos' implementation of "x years since…". We are aiming to add more categories of memories in the future, such as "Spotlight of the day" or "Day of the Week highlights".

<iframe
  width="560"
  height="315"
  src="https://www.youtube.com/embed/j5XZKvViPew"
  title="YouTube video player"
  frameborder="0"
  allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
  allowfullscreen
></iframe>

This feature is now available on the web and will be ported to the mobile app in the near future.

## Support XMP sidecars

Immich can now import/upload XMP sidecars from the CLI and use the information as the metadata of assets.

## Support more raw formats

With the recent updates to Immich's dependencies, we are extending and hardening support for multiple raw formats, so users with DSLR or mirrorless cameras can now upload their original files to Immich and have them displayed as high-quality thumbnails on the web and mobile views.

## Justified layout for web timeline and blurred thumbnail hash

This is an aesthetic improvement to the user experience when browsing the timeline. Photos and videos are now displayed with their correct aspect ratio and orientation, making the browsing experience more pleasant.

To further improve the browsing experience, we have now added a blur hash to the thumbnail, so the transition is more natural, with a dreamy fade-in effect, similar to how our brain goes from a faded to a vivid memory.
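For readers curious how such a placeholder works, here is a minimal sketch assuming the `blurhash` package from PyPI; the `encode` signature varies between BlurHash implementations, and Immich's actual implementation may differ, so treat this as illustrative only:

```python
# Illustrative only: produce a BlurHash string for a thumbnail.
# Assumes the `blurhash` PyPI package; check its docs for the exact
# signature before relying on this.
import blurhash

with open("thumbnail.jpg", "rb") as image_file:
    # More x/y components = a more detailed (but longer) placeholder hash.
    hash_str = blurhash.encode(image_file, x_components=4, y_components=3)

print(hash_str)  # a short string the client decodes into a blurred preview
```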
<iframe
  width="560"
  height="315"
  src="https://www.youtube.com/embed/b95FLmGHRFc"
  title="YouTube video player"
  frameborder="0"
  allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
  allowfullscreen
></iframe>

## Hosting the machine learning container on a different machine

With the additional capabilities Immich is building toward, machine learning will get more powerful and therefore require more resources to run effectively. However, we understand that users might not have the best server resources where they host their Immich instance. Therefore, we changed how machine learning interacts with and receives the photos and videos it runs through its inference pipeline.

The machine learning container is now a headless system that can run on any machine. As long as your Immich instance can communicate with the system running the machine learning container, it can send the files and receive the required information to make Immich powerful in terms of search and intelligence. This lets you utilize a more powerful machine in your home/infrastructure to perform the CPU-intensive tasks, while Immich itself only handles the I/O operations, for a pleasant and smooth experience.
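As a rough illustration of the headless setup, the service can be queried with any HTTP client. A minimal sketch: the endpoint path, port, and content type below are taken from the machine-learning code later in this changeset, while the host name is a placeholder:

```python
# Sketch: querying the headless machine-learning container directly.
# "ml-host" is a placeholder; port 3003 and the endpoint path come from
# this changeset (load_test.sh and app/main.py).
import requests

ML_URL = "http://ml-host:3003"

with open("photo.jpg", "rb") as f:
    image_bytes = f.read()

resp = requests.post(
    f"{ML_URL}/image-classifier/tag-image",
    data=image_bytes,
    headers={"Content-Type": "image/jpg"},
)
print(resp.json())  # list of predicted tags
```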
---

So, those are the highlights for the team and the community after a busy month. There are many more changes and improvements; I encourage you to read the release notes, starting from version [v1.57.0](https://github.com/immich-app/immich/releases/tag/v1.57.0) to now.

Thank you, and I am asking for your support of the project. I hope to one day be a full-time maintainer of Immich, dedicating myself to the project as my life's work, for the community and my family. You can find the support channels below:

- Monthly donation via [GitHub Sponsors](https://github.com/sponsors/alextran1502)
- One-time donation via [GitHub Sponsors](https://github.com/sponsors/alextran1502?frequency=one-time&sponsor=alextran1502)
- [Liberapay](https://liberapay.com/alex.tran1502/)
- [buymeacoffee](https://www.buymeacoffee.com/altran1502)
- Bitcoin: 1FvEp6P6NM8EZEkpGUFAN2LqJ1gxusNxZX
- Give the project a star - the contributors love gazing at the stars and seeing their creations shining in the sky.

Join our friendly [Discord](https://discord.gg/D8JsnBEuKb) to talk and discuss Immich, tech, or anything else.

Cheers!

Until next time!

Alex
docs/docusaurus.config.js

```diff
@@ -105,6 +105,11 @@ const config = {
         position: 'right',
         label: 'API',
       },
+      {
+        to: '/blog',
+        position: 'right',
+        label: 'Blog',
+      },
       {
         href: 'https://github.com/immich-app/immich',
         label: 'GitHub',
```
machine-learning/Dockerfile

```diff
@@ -21,8 +21,8 @@ ENV NODE_ENV=production \
     PYTHONDONTWRITEBYTECODE=1 \
     PYTHONUNBUFFERED=1 \
     PATH="/opt/venv/bin:$PATH" \
-    PYTHONPATH=`pwd`
+    PYTHONPATH=/usr/src
 
 COPY --from=builder /opt/venv /opt/venv
 COPY app .
-ENTRYPOINT ["python", "main.py"]
+ENTRYPOINT ["python", "-m", "app.main"]
```
machine-learning/README.md

```diff
@@ -11,3 +11,12 @@ Running `poetry install --no-root --with dev` will install everything you need i
 
 To add or remove dependencies, you can use the commands `poetry add $PACKAGE_NAME` and `poetry remove $PACKAGE_NAME`, respectively.
 Be sure to commit the `poetry.lock` and `pyproject.toml` files to reflect any changes in dependencies.
+
+# Load Testing
+
+To measure inference throughput and latency, you can use [Locust](https://locust.io/) with the provided `locustfile.py`.
+Locust works by querying the model endpoints and aggregating their statistics, meaning the app must be deployed.
+You can run `load_test.sh` to automatically deploy the app locally and start Locust, optionally adjusting its env variables as needed.
+
+Alternatively, for more custom testing, you may also run `locust` directly: see the [documentation](https://docs.locust.io/en/stable/index.html). Note that in Locust's jargon, concurrency is measured in `users`, and each user runs one task at a time. To achieve a particular per-endpoint concurrency, multiply that number by the number of endpoints to be queried. For example, if there are 3 endpoints and you want each of them to receive 8 requests at a time, you should set the number of users to 24.
```
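The users-to-endpoints arithmetic can also be scripted with Locust's library mode. A minimal sketch, assuming the three user classes from the provided `locustfile.py` and Locust's documented `Environment` API:

```python
# Sketch: 8 requests per endpoint at a time across 3 endpoints -> 24 users.
# Uses Locust's library mode; see docs.locust.io for the Environment API.
import gevent
from locust.env import Environment

from locustfile import CLIPLoadTest, ClassificationLoadTest, RecognitionLoadTest

user_classes = [ClassificationLoadTest, CLIPLoadTest, RecognitionLoadTest]
per_endpoint = 8
users = per_endpoint * len(user_classes)  # 24, spread evenly across classes

env = Environment(user_classes=user_classes)
runner = env.create_local_runner()
runner.start(users, spawn_rate=users)  # start all users immediately
gevent.spawn_later(120, runner.quit)   # stop after 120s, as in load_test.sh
runner.greenlet.join()
print(env.stats.total.avg_response_time)
```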
machine-learning/app/__init__.py (new empty file)
machine-learning/app/config.py

```diff
@@ -1,5 +1,10 @@
+from pathlib import Path
+
 from pydantic import BaseSettings
 
+from .schemas import ModelType
+
+
 class Settings(BaseSettings):
     cache_folder: str = "/cache"
     classification_model: str = "microsoft/resnet-50"
@@ -15,8 +20,12 @@ class Settings(BaseSettings):
     min_face_score: float = 0.7
 
     class Config(BaseSettings.Config):
-        env_prefix = 'MACHINE_LEARNING_'
+        env_prefix = "MACHINE_LEARNING_"
         case_sensitive = False
 
 
+def get_cache_dir(model_name: str, model_type: ModelType) -> Path:
+    return Path(settings.cache_folder, model_type.value, model_name)
+
+
 settings = Settings()
```
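With the `env_prefix` above, every `Settings` field maps to a `MACHINE_LEARNING_`-prefixed environment variable, matched case-insensitively. A minimal sketch of that pydantic `BaseSettings` behavior:

```python
# Sketch: pydantic BaseSettings picks up MACHINE_LEARNING_* variables.
import os

os.environ["MACHINE_LEARNING_CACHE_FOLDER"] = "/tmp/model_cache"
os.environ["machine_learning_min_face_score"] = "0.5"  # case_sensitive = False

from app.config import Settings  # the module shown above

settings = Settings()
print(settings.cache_folder)    # /tmp/model_cache
print(settings.min_face_score)  # 0.5
```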
machine-learning/app/main.py

```diff
@@ -1,52 +1,58 @@
 import os
-import io
+from io import BytesIO
 from typing import Any
 
-from cache import ModelCache
-from schemas import (
+import cv2
+import numpy as np
+import uvicorn
+from fastapi import Body, Depends, FastAPI
+from PIL import Image
+
+from .config import settings
+from .models.base import InferenceModel
+from .models.cache import ModelCache
+from .schemas import (
     EmbeddingResponse,
     FaceResponse,
-    TagResponse,
     MessageResponse,
+    ModelType,
+    TagResponse,
     TextModelRequest,
     TextResponse,
 )
-import uvicorn
-from PIL import Image
-from fastapi import FastAPI, HTTPException, Depends, Body
-from models import get_model, run_classification, run_facial_recognition
-from config import settings
-
-_model_cache = None
 
 app = FastAPI()
 
 
 @app.on_event("startup")
 async def startup_event() -> None:
-    global _model_cache
-    _model_cache = ModelCache(ttl=settings.model_ttl, revalidate=True)
+    app.state.model_cache = ModelCache(ttl=settings.model_ttl, revalidate=True)
+    same_clip = settings.clip_image_model == settings.clip_text_model
+    app.state.clip_vision_type = ModelType.CLIP if same_clip else ModelType.CLIP_VISION
+    app.state.clip_text_type = ModelType.CLIP if same_clip else ModelType.CLIP_TEXT
     models = [
-        (settings.classification_model, "image-classification"),
-        (settings.clip_image_model, "clip"),
-        (settings.clip_text_model, "clip"),
-        (settings.facial_recognition_model, "facial-recognition"),
+        (settings.classification_model, ModelType.IMAGE_CLASSIFICATION),
+        (settings.clip_image_model, app.state.clip_vision_type),
+        (settings.clip_text_model, app.state.clip_text_type),
+        (settings.facial_recognition_model, ModelType.FACIAL_RECOGNITION),
     ]
 
-    # Get all models
     for model_name, model_type in models:
         if settings.eager_startup:
-            await _model_cache.get_cached_model(model_name, model_type)
+            await app.state.model_cache.get(model_name, model_type)
         else:
-            get_model(model_name, model_type)
+            InferenceModel.from_model_type(model_type, model_name)
 
 
-def dep_model_cache():
-    if _model_cache is None:
-        raise HTTPException(status_code=500, detail="Unable to load model.")
+def dep_pil_image(byte_image: bytes = Body(...)) -> Image.Image:
+    return Image.open(BytesIO(byte_image))
 
 
-def dep_input_image(image: bytes = Body(...)) -> Image:
-    return Image.open(io.BytesIO(image))
+def dep_cv_image(byte_image: bytes = Body(...)) -> cv2.Mat:
+    byte_image_np = np.frombuffer(byte_image, np.uint8)
+    return cv2.imdecode(byte_image_np, cv2.IMREAD_COLOR)
 
 
 @app.get("/", response_model=MessageResponse)
 async def root() -> dict[str, str]:
@@ -62,33 +68,29 @@ def ping() -> str:
     "/image-classifier/tag-image",
     response_model=TagResponse,
     status_code=200,
-    dependencies=[Depends(dep_model_cache)],
 )
 async def image_classification(
-    image: Image = Depends(dep_input_image)
+    image: Image.Image = Depends(dep_pil_image),
 ) -> list[str]:
-    try:
-        model = await _model_cache.get_cached_model(
-            settings.classification_model, "image-classification"
-        )
-        labels = run_classification(model, image, settings.min_tag_score)
-    except Exception as ex:
-        raise HTTPException(status_code=500, detail=str(ex))
-    else:
-        return labels
+    model = await app.state.model_cache.get(
+        settings.classification_model, ModelType.IMAGE_CLASSIFICATION
+    )
+    labels = model.predict(image)
+    return labels
 
 
 @app.post(
     "/sentence-transformer/encode-image",
     response_model=EmbeddingResponse,
     status_code=200,
-    dependencies=[Depends(dep_model_cache)],
 )
 async def clip_encode_image(
-    image: Image = Depends(dep_input_image)
+    image: Image.Image = Depends(dep_pil_image),
 ) -> list[float]:
-    model = await _model_cache.get_cached_model(settings.clip_image_model, "clip")
-    embedding = model.encode(image).tolist()
+    model = await app.state.model_cache.get(
+        settings.clip_image_model, app.state.clip_vision_type
+    )
+    embedding = model.predict(image)
     return embedding
 
 
@@ -96,13 +98,12 @@ async def clip_encode_image(
     "/sentence-transformer/encode-text",
     response_model=EmbeddingResponse,
     status_code=200,
-    dependencies=[Depends(dep_model_cache)],
 )
-async def clip_encode_text(
-    payload: TextModelRequest
-) -> list[float]:
-    model = await _model_cache.get_cached_model(settings.clip_text_model, "clip")
-    embedding = model.encode(payload.text).tolist()
+async def clip_encode_text(payload: TextModelRequest) -> list[float]:
+    model = await app.state.model_cache.get(
+        settings.clip_text_model, app.state.clip_text_type
+    )
+    embedding = model.predict(payload.text)
     return embedding
 
 
@@ -110,22 +111,21 @@ async def clip_encode_text(
     "/facial-recognition/detect-faces",
     response_model=FaceResponse,
     status_code=200,
-    dependencies=[Depends(dep_model_cache)],
 )
 async def facial_recognition(
-    image: bytes = Body(...),
+    image: cv2.Mat = Depends(dep_cv_image),
 ) -> list[dict[str, Any]]:
-    model = await _model_cache.get_cached_model(
-        settings.facial_recognition_model, "facial-recognition"
+    model = await app.state.model_cache.get(
+        settings.facial_recognition_model, ModelType.FACIAL_RECOGNITION
     )
-    faces = run_facial_recognition(model, image)
+    faces = model.predict(image)
     return faces
 
 
 if __name__ == "__main__":
     is_dev = os.getenv("NODE_ENV") == "development"
     uvicorn.run(
-        "main:app",
+        "app.main:app",
         host=settings.host,
         port=settings.port,
         reload=is_dev,
```
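The refactored endpoints can be smoke-tested in-process, without a full deployment. A sketch using FastAPI's `TestClient`; model weights are still downloaded on first use:

```python
# Sketch: exercising the app in-process. Entering the context manager runs
# the startup event, which builds app.state.model_cache.
from fastapi.testclient import TestClient

from app.main import app

with TestClient(app) as client:
    with open("photo.jpg", "rb") as f:
        resp = client.post(
            "/image-classifier/tag-image",
            data=f.read(),
            headers={"Content-Type": "image/jpg"},
        )
    print(resp.status_code, resp.json())
```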
machine-learning/app/models.py (deleted, 119 lines)

```python
import torch
from insightface.app import FaceAnalysis
from pathlib import Path

from transformers import pipeline, Pipeline
from sentence_transformers import SentenceTransformer
from typing import Any, BinaryIO
import cv2 as cv
import numpy as np
from PIL import Image
from config import settings

device = "cuda" if torch.cuda.is_available() else "cpu"


def get_model(model_name: str, model_type: str, **model_kwargs):
    """
    Instantiates the specified model.

    Args:
        model_name: Name of model in the model hub used for the task.
        model_type: Model type or task, which determines which model zoo is used.
            `facial-recognition` uses Insightface, while all other models use the HF Model Hub.

            Options:
                `image-classification`, `clip`, `facial-recognition`, `tokenizer`, `processor`

    Returns:
        model: The requested model.
    """

    cache_dir = _get_cache_dir(model_name, model_type)
    match model_type:
        case "facial-recognition":
            model = _load_facial_recognition(
                model_name, cache_dir=cache_dir, **model_kwargs
            )
        case "clip":
            model = SentenceTransformer(
                model_name, cache_folder=cache_dir, **model_kwargs
            )
        case _:
            model = pipeline(
                model_type,
                model_name,
                model_kwargs={"cache_dir": cache_dir, **model_kwargs},
            )

    return model


def run_classification(
    model: Pipeline, image: Image, min_score: float | None = None
):
    predictions: list[dict[str, Any]] = model(image)  # type: ignore
    result = {
        tag
        for pred in predictions
        for tag in pred["label"].split(", ")
        if min_score is None or pred["score"] >= min_score
    }

    return list(result)


def run_facial_recognition(
    model: FaceAnalysis, image: bytes
) -> list[dict[str, Any]]:
    file_bytes = np.frombuffer(image, dtype=np.uint8)
    img = cv.imdecode(file_bytes, cv.IMREAD_COLOR)
    height, width, _ = img.shape
    results = []
    faces = model.get(img)

    for face in faces:
        x1, y1, x2, y2 = face.bbox

        results.append(
            {
                "imageWidth": width,
                "imageHeight": height,
                "boundingBox": {
                    "x1": round(x1),
                    "y1": round(y1),
                    "x2": round(x2),
                    "y2": round(y2),
                },
                "score": face.det_score.item(),
                "embedding": face.normed_embedding.tolist(),
            }
        )
    return results


def _load_facial_recognition(
    model_name: str,
    min_face_score: float | None = None,
    cache_dir: Path | str | None = None,
    **model_kwargs,
):
    if cache_dir is None:
        cache_dir = _get_cache_dir(model_name, "facial-recognition")
    if isinstance(cache_dir, Path):
        cache_dir = cache_dir.as_posix()
    if min_face_score is None:
        min_face_score = settings.min_face_score

    model = FaceAnalysis(
        name=model_name,
        root=cache_dir,
        allowed_modules=["detection", "recognition"],
        **model_kwargs,
    )
    model.prepare(ctx_id=0, det_thresh=min_face_score, det_size=(640, 640))
    return model


def _get_cache_dir(model_name: str, model_type: str) -> Path:
    return Path(settings.cache_folder, device, model_type, model_name)
```
machine-learning/app/models/__init__.py (new file, 3 lines)

```python
from .clip import CLIPSTTextEncoder, CLIPSTVisionEncoder
from .facial_recognition import FaceRecognizer
from .image_classification import ImageClassifier
```
machine-learning/app/models/base.py (new file, 52 lines)

```python
from __future__ import annotations

from abc import abstractmethod, ABC
from pathlib import Path
from typing import Any

from ..config import get_cache_dir
from ..schemas import ModelType


class InferenceModel(ABC):
    _model_type: ModelType

    def __init__(
        self,
        model_name: str,
        cache_dir: Path | None = None,
    ):
        self.model_name = model_name
        self._cache_dir = (
            cache_dir
            if cache_dir is not None
            else get_cache_dir(model_name, self.model_type)
        )

    @abstractmethod
    def predict(self, inputs: Any) -> Any:
        ...

    @property
    def model_type(self) -> ModelType:
        return self._model_type

    @property
    def cache_dir(self) -> Path:
        return self._cache_dir

    @cache_dir.setter
    def cache_dir(self, cache_dir: Path):
        self._cache_dir = cache_dir

    @classmethod
    def from_model_type(
        cls, model_type: ModelType, model_name, **model_kwargs
    ) -> InferenceModel:
        subclasses = {
            subclass._model_type: subclass for subclass in cls.__subclasses__()
        }
        if model_type not in subclasses:
            raise ValueError(f"Unsupported model type: {model_type}")

        return subclasses[model_type](model_name, **model_kwargs)
```
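`from_model_type` dispatches on `_model_type` over `__subclasses__()`, so a new model kind only needs to subclass `InferenceModel`. A usage sketch; the model name here is illustrative:

```python
# Sketch: resolving a concrete model class from the ModelType enum.
from app.models.base import InferenceModel
from app.schemas import ModelType

# "clip-ViT-B-32" is an illustrative sentence-transformers model name.
clip = InferenceModel.from_model_type(ModelType.CLIP, "clip-ViT-B-32")
embedding = clip.predict("a photo of a dog")  # list[float]
```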
machine-learning/app/models/cache.py

```diff
@@ -1,8 +1,11 @@
-from aiocache.plugins import TimingPlugin, BasePlugin
+import asyncio
+
 from aiocache.backends.memory import SimpleMemoryCache
 from aiocache.lock import OptimisticLock
-from typing import Any
-from models import get_model
+from aiocache.plugins import BasePlugin, TimingPlugin
+
+from ..schemas import ModelType
+from .base import InferenceModel
 
 
 class ModelCache:
@@ -10,7 +13,7 @@ class ModelCache:
 
     def __init__(
         self,
-        ttl: int | None = None,
+        ttl: float | None = None,
         revalidate: bool = False,
         timeout: int | None = None,
         profiling: bool = False,
@@ -35,9 +38,9 @@ class ModelCache:
             ttl=ttl, timeout=timeout, plugins=plugins, namespace=None
         )
 
-    async def get_cached_model(
-        self, model_name: str, model_type: str, **model_kwargs
-    ) -> Any:
+    async def get(
+        self, model_name: str, model_type: ModelType, **model_kwargs
+    ) -> InferenceModel:
         """
         Args:
             model_name: Name of model in the model hub used for the task.
@@ -47,11 +50,16 @@ class ModelCache:
             model: The requested model.
         """
 
-        key = self.cache.build_key(model_name, model_type)
+        key = self.cache.build_key(model_name, model_type.value)
         model = await self.cache.get(key)
         if model is None:
            async with OptimisticLock(self.cache, key) as lock:
-                model = get_model(model_name, model_type, **model_kwargs)
+                model = await asyncio.get_running_loop().run_in_executor(
+                    None,
+                    lambda: InferenceModel.from_model_type(
+                        model_type, model_name, **model_kwargs
+                    ),
+                )
                 await lock.cas(model, ttl=self.ttl)
         return model
```
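The new `get` keeps model loading off the event loop (via the executor) and deduplicates concurrent loads of the same model through the optimistic lock. A usage sketch:

```python
# Sketch: fetching (and lazily loading) a model through the cache.
import asyncio

from app.models.cache import ModelCache
from app.schemas import ModelType


async def main() -> None:
    cache = ModelCache(ttl=300, revalidate=True)
    model = await cache.get("microsoft/resnet-50", ModelType.IMAGE_CLASSIFICATION)
    print(type(model).__name__)  # ImageClassifier


asyncio.run(main())
```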
machine-learning/app/models/clip.py (new file, 37 lines)

```python
from pathlib import Path

from PIL.Image import Image
from sentence_transformers import SentenceTransformer

from ..schemas import ModelType
from .base import InferenceModel


class CLIPSTEncoder(InferenceModel):
    _model_type = ModelType.CLIP

    def __init__(
        self,
        model_name: str,
        cache_dir: Path | None = None,
        **model_kwargs,
    ):
        super().__init__(model_name, cache_dir)
        self.model = SentenceTransformer(
            self.model_name,
            cache_folder=self.cache_dir.as_posix(),
            **model_kwargs,
        )

    def predict(self, image_or_text: Image | str) -> list[float]:
        return self.model.encode(image_or_text).tolist()


# stubs to allow different behavior between the two in the future
# and handle loading different image and text clip models
class CLIPSTVisionEncoder(CLIPSTEncoder):
    _model_type = ModelType.CLIP_VISION


class CLIPSTTextEncoder(CLIPSTEncoder):
    _model_type = ModelType.CLIP_TEXT
```
machine-learning/app/models/facial_recognition.py (new file, 59 lines)

```python
from pathlib import Path
from typing import Any

import cv2
from insightface.app import FaceAnalysis

from ..config import settings
from ..schemas import ModelType
from .base import InferenceModel


class FaceRecognizer(InferenceModel):
    _model_type = ModelType.FACIAL_RECOGNITION

    def __init__(
        self,
        model_name: str,
        min_score: float = settings.min_face_score,
        cache_dir: Path | None = None,
        **model_kwargs,
    ):
        super().__init__(model_name, cache_dir)
        self.min_score = min_score
        model = FaceAnalysis(
            name=self.model_name,
            root=self.cache_dir.as_posix(),
            allowed_modules=["detection", "recognition"],
            **model_kwargs,
        )
        model.prepare(
            ctx_id=0,
            det_thresh=self.min_score,
            det_size=(640, 640),
        )
        self.model = model

    def predict(self, image: cv2.Mat) -> list[dict[str, Any]]:
        height, width, _ = image.shape
        results = []
        faces = self.model.get(image)

        for face in faces:
            x1, y1, x2, y2 = face.bbox

            results.append(
                {
                    "imageWidth": width,
                    "imageHeight": height,
                    "boundingBox": {
                        "x1": round(x1),
                        "y1": round(y1),
                        "x2": round(x2),
                        "y2": round(y2),
                    },
                    "score": face.det_score.item(),
                    "embedding": face.normed_embedding.tolist(),
                }
            )
        return results
```
machine-learning/app/models/image_classification.py (new file, 40 lines)

```python
from pathlib import Path

from PIL.Image import Image
from transformers.pipelines import pipeline

from ..config import settings
from ..schemas import ModelType
from .base import InferenceModel


class ImageClassifier(InferenceModel):
    _model_type = ModelType.IMAGE_CLASSIFICATION

    def __init__(
        self,
        model_name: str,
        min_score: float = settings.min_tag_score,
        cache_dir: Path | None = None,
        **model_kwargs,
    ):
        super().__init__(model_name, cache_dir)
        self.min_score = min_score

        self.model = pipeline(
            self.model_type.value,
            self.model_name,
            model_kwargs={"cache_dir": self.cache_dir, **model_kwargs},
        )

    def predict(self, image: Image) -> list[str]:
        predictions = self.model(image)
        tags = list(
            {
                tag
                for pred in predictions
                for tag in pred["label"].split(", ")
                if pred["score"] >= self.min_score
            }
        )
        return tags
```
machine-learning/app/schemas.py

```diff
@@ -1,3 +1,5 @@
+from enum import Enum
+
 from pydantic import BaseModel
 
 
@@ -54,3 +56,11 @@ class Face(BaseModel):
 
 class FaceResponse(BaseModel):
     __root__: list[Face]
+
+
+class ModelType(Enum):
+    IMAGE_CLASSIFICATION = "image-classification"
+    CLIP = "clip"
+    CLIP_VISION = "clip-vision"
+    CLIP_TEXT = "clip-text"
+    FACIAL_RECOGNITION = "facial-recognition"
```
machine-learning/load_test.sh (new executable file, 24 lines)

```bash
export MACHINE_LEARNING_CACHE_FOLDER=/tmp/model_cache
export MACHINE_LEARNING_MIN_FACE_SCORE=0.034 # returns 1 face per request; setting this to 0 blows up the number of faces to the thousands
export MACHINE_LEARNING_MIN_TAG_SCORE=0.0
export PID_FILE=/tmp/locust_pid
export LOG_FILE=/tmp/gunicorn.log
export HEADLESS=false
export HOST=127.0.0.1:3003
export CONCURRENCY=4
export NUM_ENDPOINTS=3
export PYTHONPATH=app

gunicorn app.main:app --worker-class uvicorn.workers.UvicornWorker \
  --bind $HOST --daemon --error-logfile $LOG_FILE --pid $PID_FILE
while true ; do
  echo "Loading models..."
  sleep 5
  if cat $LOG_FILE | grep -q -E "startup complete"; then break; fi
done

# "users" are assigned only one task, so multiply concurrency by the number of tasks
locust --host http://$HOST --web-host 127.0.0.1 \
  --run-time 120s --users $(($CONCURRENCY * $NUM_ENDPOINTS)) $(if $HEADLESS; then echo "--headless"; fi)

if [[ -e $PID_FILE ]]; then kill $(cat $PID_FILE); fi
```
machine-learning/locustfile.py (new file, 52 lines)

```python
from io import BytesIO

from locust import HttpUser, events, task
from PIL import Image


@events.test_start.add_listener
def on_test_start(environment, **kwargs):
    global byte_image
    image = Image.new("RGB", (1000, 1000))
    byte_image = BytesIO()
    image.save(byte_image, format="jpeg")


class InferenceLoadTest(HttpUser):
    abstract: bool = True
    host = "http://127.0.0.1:3003"
    data: bytes
    headers: dict[str, str] = {"Content-Type": "image/jpg"}

    # re-use the image across all instances in a process
    def on_start(self):
        global byte_image
        self.data = byte_image.getvalue()


class ClassificationLoadTest(InferenceLoadTest):
    @task
    def classify(self):
        self.client.post(
            "/image-classifier/tag-image", data=self.data, headers=self.headers
        )


class CLIPLoadTest(InferenceLoadTest):
    @task
    def encode_image(self):
        self.client.post(
            "/sentence-transformer/encode-image",
            data=self.data,
            headers=self.headers,
        )


class RecognitionLoadTest(InferenceLoadTest):
    @task
    def recognize(self):
        self.client.post(
            "/facial-recognition/detect-faces",
            data=self.data,
            headers=self.headers,
        )
```
machine-learning/poetry.lock (generated, 1472 changes — diff suppressed because it is too large)
machine-learning/pyproject.toml

```diff
@@ -1,6 +1,6 @@
 [tool.poetry]
 name = "machine-learning"
-version = "1.59.1"
+version = "1.64.0"
 description = ""
 authors = ["Hau Tran <alex.tran1502@gmail.com>"]
 readme = "README.md"
@@ -27,6 +27,8 @@ aiocache = "^0.12.1"
 mypy = "^1.3.0"
 black = "^23.3.0"
 pytest = "^7.3.1"
+locust = "^2.15.1"
+gunicorn = "^20.1.0"
 
 [[tool.poetry.source]]
 name = "pytorch-cpu"
```
mobile/android/fastlane/Fastfile

```diff
@@ -35,8 +35,8 @@ platform :android do
       task: 'bundle',
       build_type: 'Release',
       properties: {
-        "android.injected.version.code" => 86,
-        "android.injected.version.name" => "1.63.2",
+        "android.injected.version.code" => 87,
+        "android.injected.version.name" => "1.64.0",
       }
     )
     upload_to_play_store(skip_upload_apk: true, skip_upload_images: true, skip_upload_screenshots: true, aab: '../build/app/outputs/bundle/release/app-release.aab')
```
mobile/ios/fastlane/Fastfile

```diff
@@ -19,7 +19,7 @@ platform :ios do
   desc "iOS Beta"
   lane :beta do
     increment_version_number(
-      version_number: "1.63.2"
+      version_number: "1.64.0"
     )
     increment_build_number(
       build_number: latest_testflight_build_number + 1,
```
mobile/lib/modules/asset_viewer/providers/show_controls.provider.dart (new file, 21 lines)

```dart
import 'package:hooks_riverpod/hooks_riverpod.dart';

final showControlsProvider = StateNotifierProvider<ShowControls, bool>((ref) {
  return ShowControls(ref);
});

class ShowControls extends StateNotifier<bool> {
  ShowControls(this.ref) : super(true);

  final Ref ref;

  bool get show => state;

  set show(bool value) {
    state = value;
  }

  void toggle() {
    state = !state;
  }
}
```
mobile/lib/modules/asset_viewer/providers/video_player_controls_provider.dart (new file, 46 lines)

```dart
import 'package:hooks_riverpod/hooks_riverpod.dart';

class VideoPlaybackControls {
  VideoPlaybackControls({required this.position, required this.mute});

  final double position;
  final bool mute;
}

final videoPlayerControlsProvider =
    StateNotifierProvider<VideoPlayerControls, VideoPlaybackControls>((ref) {
  return VideoPlayerControls(ref);
});

class VideoPlayerControls extends StateNotifier<VideoPlaybackControls> {
  VideoPlayerControls(this.ref)
      : super(
          VideoPlaybackControls(
            position: 0,
            mute: false,
          ),
        );

  final Ref ref;

  VideoPlaybackControls get value => state;

  set value(VideoPlaybackControls value) {
    state = value;
  }

  double get position => state.position;
  bool get mute => state.mute;

  set position(double value) {
    state = VideoPlaybackControls(position: value, mute: state.mute);
  }

  set mute(bool value) {
    state = VideoPlaybackControls(position: state.position, mute: value);
  }

  void toggleMute() {
    state = VideoPlaybackControls(position: state.position, mute: !state.mute);
  }
}
```
mobile/lib/modules/asset_viewer/providers/video_player_value_provider.dart (new file, 35 lines)

```dart
import 'package:hooks_riverpod/hooks_riverpod.dart';

class VideoPlaybackValue {
  VideoPlaybackValue({required this.position, required this.duration});

  final Duration position;
  final Duration duration;
}

final videoPlaybackValueProvider =
    StateNotifierProvider<VideoPlaybackValueState, VideoPlaybackValue>((ref) {
  return VideoPlaybackValueState(ref);
});

class VideoPlaybackValueState extends StateNotifier<VideoPlaybackValue> {
  VideoPlaybackValueState(this.ref)
      : super(
          VideoPlaybackValue(
            position: Duration.zero,
            duration: Duration.zero,
          ),
        );

  final Ref ref;

  VideoPlaybackValue get value => state;

  set value(VideoPlaybackValue value) {
    state = value;
  }

  set position(Duration value) {
    state = VideoPlaybackValue(position: value, duration: state.duration);
  }
}
```
mobile/lib/modules/asset_viewer/ui/animated_play_pause.dart (new file, 57 lines)

```dart
import 'package:flutter/material.dart';

/// A widget that animates implicitly between a play and a pause icon.
class AnimatedPlayPause extends StatefulWidget {
  const AnimatedPlayPause({
    Key? key,
    required this.playing,
    this.size,
    this.color,
  }) : super(key: key);

  final double? size;
  final bool playing;
  final Color? color;

  @override
  State<StatefulWidget> createState() => AnimatedPlayPauseState();
}

class AnimatedPlayPauseState extends State<AnimatedPlayPause>
    with SingleTickerProviderStateMixin {
  late final animationController = AnimationController(
    vsync: this,
    value: widget.playing ? 1 : 0,
    duration: const Duration(milliseconds: 300),
  );

  @override
  void didUpdateWidget(AnimatedPlayPause oldWidget) {
    super.didUpdateWidget(oldWidget);
    if (widget.playing != oldWidget.playing) {
      if (widget.playing) {
        animationController.forward();
      } else {
        animationController.reverse();
      }
    }
  }

  @override
  void dispose() {
    animationController.dispose();
    super.dispose();
  }

  @override
  Widget build(BuildContext context) {
    return Center(
      child: AnimatedIcon(
        color: widget.color,
        size: widget.size,
        icon: AnimatedIcons.play_pause,
        progress: animationController,
      ),
    );
  }
}
```
mobile/lib/modules/asset_viewer/ui/center_play_button.dart (new file, 53 lines)

```dart
import 'package:flutter/material.dart';
import 'package:immich_mobile/modules/asset_viewer/ui/animated_play_pause.dart';

class CenterPlayButton extends StatelessWidget {
  const CenterPlayButton({
    Key? key,
    required this.backgroundColor,
    this.iconColor,
    required this.show,
    required this.isPlaying,
    required this.isFinished,
    this.onPressed,
  }) : super(key: key);

  final Color backgroundColor;
  final Color? iconColor;
  final bool show;
  final bool isPlaying;
  final bool isFinished;
  final VoidCallback? onPressed;

  @override
  Widget build(BuildContext context) {
    return ColoredBox(
      color: Colors.transparent,
      child: Center(
        child: UnconstrainedBox(
          child: AnimatedOpacity(
            opacity: show ? 1.0 : 0.0,
            duration: const Duration(milliseconds: 100),
            child: DecoratedBox(
              decoration: BoxDecoration(
                color: backgroundColor,
                shape: BoxShape.circle,
              ),
              child: IconButton(
                iconSize: 32,
                padding: const EdgeInsets.all(12.0),
                icon: isFinished
                    ? Icon(Icons.replay, color: iconColor)
                    : AnimatedPlayPause(
                        color: iconColor,
                        playing: isPlaying,
                      ),
                onPressed: onPressed,
              ),
            ),
          ),
        ),
      ),
    );
  }
}
```
mobile/lib/modules/asset_viewer/ui/video_player_controls.dart (new file, 207 lines)

```dart
import 'dart:async';

import 'package:chewie/chewie.dart';
import 'package:flutter/material.dart';
import 'package:hooks_riverpod/hooks_riverpod.dart';
import 'package:immich_mobile/modules/asset_viewer/providers/show_controls.provider.dart';
import 'package:immich_mobile/modules/asset_viewer/providers/video_player_controls_provider.dart';
import 'package:immich_mobile/modules/asset_viewer/providers/video_player_value_provider.dart';
import 'package:immich_mobile/modules/asset_viewer/ui/center_play_button.dart';
import 'package:immich_mobile/shared/ui/immich_loading_indicator.dart';
import 'package:video_player/video_player.dart';

class VideoPlayerControls extends ConsumerStatefulWidget {
  const VideoPlayerControls({
    Key? key,
  }) : super(key: key);

  @override
  VideoPlayerControlsState createState() => VideoPlayerControlsState();
}

class VideoPlayerControlsState extends ConsumerState<VideoPlayerControls>
    with SingleTickerProviderStateMixin {
  late VideoPlayerController controller;
  late VideoPlayerValue _latestValue;
  bool _displayBufferingIndicator = false;
  double? _latestVolume;
  Timer? _hideTimer;

  ChewieController? _chewieController;
  ChewieController get chewieController => _chewieController!;

  @override
  Widget build(BuildContext context) {
    ref.listen(videoPlayerControlsProvider.select((value) => value.mute),
        (_, value) {
      _mute(value);
      _cancelAndRestartTimer();
    });

    ref.listen(videoPlayerControlsProvider.select((value) => value.position),
        (_, position) {
      _seekTo(position);
      _cancelAndRestartTimer();
    });

    if (_latestValue.hasError) {
      return chewieController.errorBuilder?.call(
            context,
            chewieController.videoPlayerController.value.errorDescription!,
          ) ??
          const Center(
            child: Icon(
              Icons.error,
              color: Colors.white,
              size: 42,
            ),
          );
    }

    return GestureDetector(
      onTap: () => _cancelAndRestartTimer(),
      child: AbsorbPointer(
        absorbing: !ref.watch(showControlsProvider),
        child: Stack(
          children: [
            if (_displayBufferingIndicator)
              const Center(
                child: ImmichLoadingIndicator(),
              )
            else
              _buildHitArea(),
          ],
        ),
      ),
    );
  }

  @override
  void dispose() {
    _dispose();
    super.dispose();
  }

  void _dispose() {
    controller.removeListener(_updateState);
    _hideTimer?.cancel();
  }

  @override
  void didChangeDependencies() {
    final oldController = _chewieController;
    _chewieController = ChewieController.of(context);
    controller = chewieController.videoPlayerController;

    if (oldController != chewieController) {
      _dispose();
      _initialize();
    }

    super.didChangeDependencies();
  }

  Widget _buildHitArea() {
    final bool isFinished = _latestValue.position >= _latestValue.duration;

    return GestureDetector(
      onTap: () {
        if (_latestValue.isPlaying) {
          ref.read(showControlsProvider.notifier).show = false;
        } else {
          _playPause();
          ref.read(showControlsProvider.notifier).show = false;
        }
      },
      child: CenterPlayButton(
        backgroundColor: Colors.black54,
        iconColor: Colors.white,
        isFinished: isFinished,
        isPlaying: controller.value.isPlaying,
        show: ref.watch(showControlsProvider),
        onPressed: _playPause,
      ),
    );
  }

  void _cancelAndRestartTimer() {
    _hideTimer?.cancel();
    _startHideTimer();
    ref.read(showControlsProvider.notifier).show = true;
  }

  Future<void> _initialize() async {
    _mute(ref.read(videoPlayerControlsProvider.select((value) => value.mute)));

    controller.addListener(_updateState);
    _latestValue = controller.value;

    if (controller.value.isPlaying || chewieController.autoPlay) {
      _startHideTimer();
    }
  }

  void _playPause() {
    final isFinished = _latestValue.position >= _latestValue.duration;

    setState(() {
      if (controller.value.isPlaying) {
        ref.read(showControlsProvider.notifier).show = true;
        _hideTimer?.cancel();
        controller.pause();
      } else {
        _cancelAndRestartTimer();

        if (!controller.value.isInitialized) {
          controller.initialize().then((_) {
            controller.play();
          });
        } else {
          if (isFinished) {
            controller.seekTo(Duration.zero);
          }
          controller.play();
        }
      }
    });
  }

  void _startHideTimer() {
    final hideControlsTimer = chewieController.hideControlsTimer.isNegative
        ? ChewieController.defaultHideControlsTimer
        : chewieController.hideControlsTimer;
    _hideTimer = Timer(hideControlsTimer, () {
      ref.read(showControlsProvider.notifier).show = false;
    });
  }

  void _updateState() {
    if (!mounted) return;

    _displayBufferingIndicator = controller.value.isBuffering;

    setState(() {
      _latestValue = controller.value;
      ref.read(videoPlaybackValueProvider.notifier).value = VideoPlaybackValue(
        position: _latestValue.position,
        duration: _latestValue.duration,
      );
    });
  }

  void _mute(bool mute) {
    if (mute) {
      _latestVolume = controller.value.volume;
      controller.setVolume(0);
    } else {
      controller.setVolume(_latestVolume ?? 0.5);
    }
  }

  void _seekTo(double position) {
    final Duration pos = controller.value.duration * (position / 100.0);
    if (pos != controller.value.position) {
      controller.seekTo(pos);
    }
  }
}
```
@@ -1,4 +1,5 @@
|
||||
import 'dart:io';
|
||||
import 'dart:math';
|
||||
import 'package:easy_localization/easy_localization.dart';
|
||||
import 'package:auto_route/auto_route.dart';
|
||||
import 'package:cached_network_image/cached_network_image.dart';
|
||||
@@ -6,8 +7,11 @@ import 'package:flutter/material.dart';
|
||||
import 'package:flutter/services.dart';
|
||||
import 'package:flutter_hooks/flutter_hooks.dart' hide Store;
|
||||
import 'package:hooks_riverpod/hooks_riverpod.dart';
|
||||
import 'package:immich_mobile/modules/asset_viewer/providers/show_controls.provider.dart';
|
||||
import 'package:immich_mobile/modules/asset_viewer/providers/video_player_controls_provider.dart';
|
||||
import 'package:immich_mobile/modules/album/ui/add_to_album_bottom_sheet.dart';
|
||||
import 'package:immich_mobile/modules/asset_viewer/providers/image_viewer_page_state.provider.dart';
|
||||
import 'package:immich_mobile/modules/asset_viewer/providers/video_player_value_provider.dart';
|
||||
import 'package:immich_mobile/modules/asset_viewer/ui/advanced_bottom_sheet.dart';
|
||||
import 'package:immich_mobile/modules/asset_viewer/ui/exif_bottom_sheet.dart';
|
||||
import 'package:immich_mobile/modules/asset_viewer/ui/top_control_app_bar.dart';
|
||||
@@ -49,9 +53,9 @@ class GalleryViewerPage extends HookConsumerWidget {
|
||||
final isLoadPreview = useState(AppSettingsEnum.loadPreview.defaultValue);
|
||||
final isLoadOriginal = useState(AppSettingsEnum.loadOriginal.defaultValue);
|
||||
final isZoomed = useState<bool>(false);
|
||||
final showAppBar = useState<bool>(true);
|
||||
final isPlayingMotionVideo = useState(false);
|
||||
final isPlayingVideo = useState(false);
|
||||
final progressValue = useState(0.0);
|
||||
Offset? localPosition;
|
||||
final authToken = 'Bearer ${Store.get(StoreKey.accessToken)}';
|
||||
final currentIndex = useState(initialIndex);
|
||||
@@ -60,15 +64,6 @@ class GalleryViewerPage extends HookConsumerWidget {
|
||||
|
||||
Asset asset() => watchedAsset.value ?? currentAsset;
|
||||
|
||||
showAppBar.addListener(() {
|
||||
// Change to and from immersive mode, hiding navigation and app bar
|
||||
if (showAppBar.value) {
|
||||
SystemChrome.setEnabledSystemUIMode(SystemUiMode.edgeToEdge);
|
||||
} else {
|
||||
SystemChrome.setEnabledSystemUIMode(SystemUiMode.immersive);
|
||||
}
|
||||
});
|
||||
|
||||
useEffect(
|
||||
() {
|
||||
isLoadPreview.value =
|
||||
@@ -277,15 +272,11 @@ class GalleryViewerPage extends HookConsumerWidget {
|
||||
}
|
||||
|
||||
buildAppBar() {
|
||||
final show = (showAppBar.value || // onTap has the final say
|
||||
(showAppBar.value && !isZoomed.value)) &&
|
||||
!isPlayingVideo.value;
|
||||
|
||||
return IgnorePointer(
|
||||
ignoring: !show,
|
||||
ignoring: !ref.watch(showControlsProvider),
|
||||
child: AnimatedOpacity(
|
||||
duration: const Duration(milliseconds: 100),
|
||||
opacity: show ? 1.0 : 0.0,
|
||||
opacity: ref.watch(showControlsProvider) ? 1.0 : 0.0,
|
||||
child: Container(
|
||||
color: Colors.black.withOpacity(0.4),
|
||||
child: TopControlAppBar(
|
||||
@@ -313,65 +304,160 @@ class GalleryViewerPage extends HookConsumerWidget {
|
||||
);
|
||||
}
|
||||
|
||||
buildBottomBar() {
|
||||
final show = (showAppBar.value || // onTap has the final say
|
||||
(showAppBar.value && !isZoomed.value)) &&
|
||||
!isPlayingVideo.value;
|
||||
Widget buildProgressBar() {
|
||||
final playerValue = ref.watch(videoPlaybackValueProvider);
|
||||
|
||||
return Expanded(
|
||||
child: Slider(
|
||||
value: playerValue.duration == Duration.zero
|
||||
? 0.0
|
||||
: min(
|
||||
playerValue.position.inMicroseconds /
|
||||
playerValue.duration.inMicroseconds *
|
||||
100,
|
||||
100,
|
||||
),
|
||||
min: 0,
|
||||
max: 100,
|
||||
thumbColor: Colors.white,
|
||||
activeColor: Colors.white,
|
||||
inactiveColor: Colors.white.withOpacity(0.75),
|
||||
onChanged: (position) {
|
||||
progressValue.value = position;
|
||||
ref.read(videoPlayerControlsProvider.notifier).position = position;
|
||||
},
|
||||
),
|
||||
);
|
||||
}
|
||||
|
||||
Text buildPosition() {
|
||||
final position = ref
|
||||
.watch(videoPlaybackValueProvider.select((value) => value.position));
|
||||
|
||||
return Text(
|
||||
_formatDuration(position),
|
||||
style: TextStyle(
|
||||
fontSize: 14.0,
|
||||
color: Colors.white.withOpacity(.75),
|
||||
fontWeight: FontWeight.normal,
|
||||
),
|
||||
);
|
||||
}
|
||||
|
||||
Text buildDuration() {
|
||||
final duration = ref
|
||||
.watch(videoPlaybackValueProvider.select((value) => value.duration));
|
||||
|
||||
return Text(
|
||||
_formatDuration(duration),
|
||||
style: TextStyle(
|
||||
fontSize: 14.0,
|
||||
color: Colors.white.withOpacity(.75),
|
||||
fontWeight: FontWeight.normal,
|
||||
),
|
||||
);
|
||||
}
|
||||

Widget buildMuteButton() {
return IconButton(
icon: Icon(
ref.watch(videoPlayerControlsProvider.select((value) => value.mute))
? Icons.volume_off
: Icons.volume_up,
),
onPressed: () =>
ref.read(videoPlayerControlsProvider.notifier).toggleMute(),
color: Colors.white,
);
}

buildBottomBar() {
return IgnorePointer(
ignoring: !show,
ignoring: !ref.watch(showControlsProvider),
child: AnimatedOpacity(
duration: const Duration(milliseconds: 100),
opacity: show ? 1.0 : 0.0,
child: BottomNavigationBar(
backgroundColor: Colors.black.withOpacity(0.4),
unselectedIconTheme: const IconThemeData(color: Colors.white),
selectedIconTheme: const IconThemeData(color: Colors.white),
unselectedLabelStyle: const TextStyle(color: Colors.black),
selectedLabelStyle: const TextStyle(color: Colors.black),
showSelectedLabels: false,
showUnselectedLabels: false,
items: [
BottomNavigationBarItem(
icon: const Icon(Icons.ios_share_rounded),
label: 'control_bottom_app_bar_share'.tr(),
tooltip: 'control_bottom_app_bar_share'.tr(),
),
asset().isArchived
? BottomNavigationBarItem(
icon: const Icon(Icons.unarchive_rounded),
label: 'control_bottom_app_bar_unarchive'.tr(),
tooltip: 'control_bottom_app_bar_unarchive'.tr(),
)
: BottomNavigationBarItem(
icon: const Icon(Icons.archive_outlined),
label: 'control_bottom_app_bar_archive'.tr(),
tooltip: 'control_bottom_app_bar_archive'.tr(),
opacity: ref.watch(showControlsProvider) ? 1.0 : 0.0,
child: Column(
children: [
Visibility(
visible: !asset().isImage && !isPlayingMotionVideo.value,
child: Container(
color: Colors.black.withOpacity(0.4),
child: Padding(
padding: MediaQuery.of(context).orientation ==
Orientation.portrait
? const EdgeInsets.symmetric(horizontal: 12.0)
: const EdgeInsets.symmetric(horizontal: 64.0),
child: Row(
children: [
buildPosition(),
buildProgressBar(),
buildDuration(),
buildMuteButton(),
],
),
BottomNavigationBarItem(
icon: const Icon(Icons.delete_outline),
label: 'control_bottom_app_bar_delete'.tr(),
tooltip: 'control_bottom_app_bar_delete'.tr(),
),
),
),
BottomNavigationBar(
backgroundColor: Colors.black.withOpacity(0.4),
unselectedIconTheme: const IconThemeData(color: Colors.white),
selectedIconTheme: const IconThemeData(color: Colors.white),
unselectedLabelStyle: const TextStyle(color: Colors.black),
selectedLabelStyle: const TextStyle(color: Colors.black),
showSelectedLabels: false,
showUnselectedLabels: false,
items: [
BottomNavigationBarItem(
icon: const Icon(Icons.ios_share_rounded),
label: 'control_bottom_app_bar_share'.tr(),
tooltip: 'control_bottom_app_bar_share'.tr(),
),
asset().isArchived
? BottomNavigationBarItem(
icon: const Icon(Icons.unarchive_rounded),
label: 'control_bottom_app_bar_unarchive'.tr(),
tooltip: 'control_bottom_app_bar_unarchive'.tr(),
)
: BottomNavigationBarItem(
icon: const Icon(Icons.archive_outlined),
label: 'control_bottom_app_bar_archive'.tr(),
tooltip: 'control_bottom_app_bar_archive'.tr(),
),
BottomNavigationBarItem(
icon: const Icon(Icons.delete_outline),
label: 'control_bottom_app_bar_delete'.tr(),
tooltip: 'control_bottom_app_bar_delete'.tr(),
),
],
onTap: (index) {
switch (index) {
case 0:
shareAsset();
break;
case 1:
handleArchive(asset());
break;
case 2:
handleDelete(asset());
break;
}
},
),
],
onTap: (index) {
switch (index) {
case 0:
shareAsset();
break;
case 1:
handleArchive(asset());
break;
case 2:
handleDelete(asset());
break;
}
},
),
),
);
}

ref.listen(showControlsProvider, (_, show) {
if (show) {
SystemChrome.setEnabledSystemUIMode(SystemUiMode.edgeToEdge);
} else {
SystemChrome.setEnabledSystemUIMode(SystemUiMode.immersive);
}
});
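This replaces the earlier `showAppBar.addListener` block with `ref.listen`, keeping the side effect (switching system UI modes) tied to the provider rather than a local `ValueNotifier`. A minimal sketch of the same pattern, reusing the hypothetical `controlsVisibleProvider` from the earlier sketch:

```dart
import 'package:flutter/services.dart';
import 'package:flutter_riverpod/flutter_riverpod.dart';

// Call from a ConsumerWidget's build method. edgeToEdge restores the
// status and navigation bars; immersive hides both for full-screen viewing.
void listenForImmersiveMode(WidgetRef ref) {
  ref.listen<bool>(controlsVisibleProvider, (_, show) {
    SystemChrome.setEnabledSystemUIMode(
      show ? SystemUiMode.edgeToEdge : SystemUiMode.immersive,
    );
  });
}
```

Side effects like this belong in `ref.listen` rather than `ref.watch`, since they should run once per change instead of on every rebuild.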

ImageProvider imageProvider(Asset asset) {
if (asset.isLocal) {
return localImageProvider(asset);

@@ -405,7 +491,6 @@ class GalleryViewerPage extends HookConsumerWidget {
PhotoViewGallery.builder(
scaleStateChangedCallback: (state) {
isZoomed.value = state != PhotoViewScaleState.initial;
showAppBar.value = !isZoomed.value;
},
pageController: controller,
scrollPhysics: isZoomed.value

@@ -426,6 +511,8 @@ class GalleryViewerPage extends HookConsumerWidget {
precacheNextImage(value - 1);
}
currentIndex.value = value;
progressValue.value = 0.0;

HapticFeedback.selectionClick();
},
loadingBuilder: isLoadPreview.value

@@ -493,8 +580,9 @@ class GalleryViewerPage extends HookConsumerWidget {
localPosition = details.localPosition,
onDragUpdate: (_, details, __) =>
handleSwipeUpDown(details),
onTapDown: (_, __, ___) =>
showAppBar.value = !showAppBar.value,
onTapDown: (_, __, ___) {
ref.read(showControlsProvider.notifier).toggle();
},
imageProvider: provider,
heroAttributes: PhotoViewHeroAttributes(
tag: asset.id,

@@ -519,7 +607,7 @@ class GalleryViewerPage extends HookConsumerWidget {
filterQuality: FilterQuality.high,
maxScale: 1.0,
minScale: 1.0,
basePosition: Alignment.bottomCenter,
basePosition: Alignment.center,
child: VideoViewerPage(
onPlaying: () => isPlayingVideo.value = true,
onPaused: () => isPlayingVideo.value = false,

@@ -559,4 +647,37 @@ class GalleryViewerPage extends HookConsumerWidget {
),
);
}

String _formatDuration(Duration position) {
final ms = position.inMilliseconds;

int seconds = ms ~/ 1000;
final int hours = seconds ~/ 3600;
seconds = seconds % 3600;
final minutes = seconds ~/ 60;
seconds = seconds % 60;

final hoursString = hours >= 10
? '$hours'
: hours == 0
? '00'
: '0$hours';

final minutesString = minutes >= 10
? '$minutes'
: minutes == 0
? '00'
: '0$minutes';

final secondsString = seconds >= 10
? '$seconds'
: seconds == 0
? '00'
: '0$seconds';

final formattedTime =
'${hoursString == '00' ? '' : '$hoursString:'}$minutesString:$secondsString';

return formattedTime;
}
}
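`_formatDuration` spells the zero-padding out with nested conditionals; an equivalent, more compact form uses `padLeft`. Behavior matches the original, including dropping the hours segment when it is zero:

```dart
String formatDuration(Duration d) {
  // padLeft(2, '0') reproduces the '00' / '0x' / 'xx' cases above.
  final hh = d.inHours.toString().padLeft(2, '0');
  final mm = d.inMinutes.remainder(60).toString().padLeft(2, '0');
  final ss = d.inSeconds.remainder(60).toString().padLeft(2, '0');
  return d.inHours == 0 ? '$mm:$ss' : '$hh:$mm:$ss';
}

// formatDuration(Duration(minutes: 73, seconds: 5)) == '01:13:05'
```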

@@ -5,11 +5,13 @@ import 'package:hooks_riverpod/hooks_riverpod.dart';
import 'package:chewie/chewie.dart';
import 'package:immich_mobile/modules/asset_viewer/models/image_viewer_page_state.model.dart';
import 'package:immich_mobile/modules/asset_viewer/providers/image_viewer_page_state.provider.dart';
import 'package:immich_mobile/modules/asset_viewer/ui/video_player_controls.dart';
import 'package:immich_mobile/shared/models/asset.dart';
import 'package:immich_mobile/shared/models/store.dart';
import 'package:immich_mobile/shared/ui/immich_loading_indicator.dart';
import 'package:photo_manager/photo_manager.dart';
import 'package:video_player/video_player.dart';
import 'package:wakelock/wakelock.dart';

// ignore: must_be_immutable
class VideoViewerPage extends HookConsumerWidget {

@@ -130,13 +132,16 @@ class _VideoPlayerState extends State<VideoPlayer> {
videoPlayerController.addListener(() {
if (videoPlayerController.value.isInitialized) {
if (videoPlayerController.value.isPlaying) {
Wakelock.enable();
widget.onPlaying?.call();
} else if (!videoPlayerController.value.isPlaying) {
Wakelock.disable();
widget.onPaused?.call();
}

if (videoPlayerController.value.position ==
videoPlayerController.value.duration) {
Wakelock.disable();
widget.onVideoEnded();
}
}
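The listener now toggles the wakelock alongside the play/pause callbacks, so the screen stays on only during active playback and is released on pause or when the position reaches the duration. The pattern in isolation, as a sketch (the callback name is a placeholder):

```dart
import 'package:video_player/video_player.dart';
import 'package:wakelock/wakelock.dart';

void attachWakelock(
  VideoPlayerController controller, {
  required void Function() onEnded,
}) {
  controller.addListener(() {
    final value = controller.value;
    if (!value.isInitialized) return;
    // Hold the wakelock only while frames are actually advancing.
    value.isPlaying ? Wakelock.enable() : Wakelock.disable();
    if (value.position == value.duration) {
      Wakelock.disable();
      onEnded();
    }
  });
}
```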
@@ -170,9 +175,10 @@ class _VideoPlayerState extends State<VideoPlayer> {
videoPlayerController: videoPlayerController,
autoPlay: true,
autoInitialize: true,
allowFullScreen: true,
allowFullScreen: false,
allowedScreenSleep: false,
showControls: !widget.isMotionVideo,
customControls: const VideoPlayerControls(),
hideControlsTimer: const Duration(seconds: 5),
);
}

@@ -16,7 +16,9 @@ import 'package:immich_mobile/shared/providers/db.provider.dart';
import 'package:immich_mobile/shared/services/api.service.dart';
import 'package:immich_mobile/utils/files_helper.dart';
import 'package:isar/isar.dart';
import 'package:logging/logging.dart';
import 'package:openapi/api.dart';
import 'package:permission_handler/permission_handler.dart';
import 'package:photo_manager/photo_manager.dart';
import 'package:http_parser/http_parser.dart';
import 'package:path/path.dart' as p;

@@ -33,6 +35,7 @@ class BackupService {
final httpClient = http.Client();
final ApiService _apiService;
final Isar _db;
final Logger _log = Logger("BackupService");

BackupService(this._apiService, this._db);

@@ -203,6 +206,14 @@ class BackupService {
Function(CurrentUploadAsset) setCurrentUploadAssetCb,
Function(ErrorUploadAsset) errorCb,
) async {
if (Platform.isAndroid &&
!(await Permission.accessMediaLocation.status).isGranted) {
// double check that permission is granted here, to guard against
// uploading corrupt assets without EXIF information
_log.warning("Media location permission is not granted. "
"Cannot access original assets for backup.");
return false;
}
final String deviceId = Store.get(StoreKey.deviceId);
final String savedEndpoint = Store.get(StoreKey.serverEndpoint);
File? file;
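The new guard bails out of the upload early when `ACCESS_MEDIA_LOCATION` has been revoked; per the hunk's own comment, Android would otherwise hand back assets with location EXIF stripped, and those would be uploaded corrupted. The check in isolation, as a sketch using the permission_handler package:

```dart
import 'dart:io';
import 'package:permission_handler/permission_handler.dart';

/// Returns true when it is safe to read original media files for upload.
Future<bool> canAccessOriginals() async {
  // Only Android gates location EXIF behind ACCESS_MEDIA_LOCATION;
  // other platforms need no extra check here.
  if (!Platform.isAndroid) return true;
  return (await Permission.accessMediaLocation.status).isGranted;
}
```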
@@ -1,4 +1,5 @@
import 'package:immich_mobile/shared/services/api.service.dart';
import 'package:logging/logging.dart';
import 'package:openapi/api.dart';
import 'package:flutter_web_auth/flutter_web_auth.dart';

@@ -7,7 +8,7 @@ import 'package:flutter_web_auth/flutter_web_auth.dart';
class OAuthService {
final ApiService _apiService;
final callbackUrlScheme = 'app.immich';

final log = Logger('OAuthService');
OAuthService(this._apiService);

Future<OAuthConfigResponseDto?> getOAuthServerConfig(
@@ -33,7 +34,8 @@ class OAuthService {
url: result,
),
);
} catch (e) {
} catch (e, stack) {
log.severe("Error performing oAuthLogin: ${e.toString()}", e, stack);
return null;
}
}
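Catching `(e, stack)` instead of a bare `e` preserves the stack trace, and `Logger.severe` accepts both as optional arguments, so the failure becomes traceable instead of silently swallowed. A sketch of the pattern with the logging package; the helper name is hypothetical:

```dart
import 'package:logging/logging.dart';

final _log = Logger('OAuthService');

Future<T?> runLogged<T>(String label, Future<T> Function() action) async {
  try {
    return await action();
  } catch (e, stack) {
    // Forwarding the error object and stack trace lets log handlers
    // report the full failure context, not just the message string.
    _log.severe('Error performing $label: $e', e, stack);
    return null;
  }
}
```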
@@ -6,7 +6,7 @@ import 'package:permission_handler/permission_handler.dart';

class GalleryPermissionNotifier extends StateNotifier<PermissionStatus> {
GalleryPermissionNotifier()
: super(PermissionStatus.denied) // Denied is the initial state
{
// Sets the initial state
getGalleryPermissionStatus();

@@ -16,19 +16,20 @@ class GalleryPermissionNotifier extends StateNotifier<PermissionStatus> {

/// Requests the gallery permission
Future<PermissionStatus> requestGalleryPermission() async {
PermissionStatus result;
// Android 32 and below uses Permission.storage
if (Platform.isAndroid) {
final androidInfo = await DeviceInfoPlugin().androidInfo;
if (androidInfo.version.sdkInt <= 32) {
// Android 32 and below needs storage
final permission = await Permission.storage.request();
state = permission;
return permission;
result = permission;
} else {
// Android 33 needs photo & video
final photos = await Permission.photos.request();
if (!photos.isGranted) {
// Don't ask twice for the same permission
state = photos;
return photos;
}
final videos = await Permission.videos.request();

@@ -45,28 +46,32 @@ class GalleryPermissionNotifier extends StateNotifier<PermissionStatus> {
status = PermissionStatus.denied;
}

state = status;
return status;
result = status;
}
if (result == PermissionStatus.granted &&
androidInfo.version.sdkInt >= 29) {
result = await Permission.accessMediaLocation.request();
}
} else {
// iOS can use photos
final photos = await Permission.photos.request();
state = photos;
return photos;
result = photos;
}
state = result;
return result;
}
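The refactor funnels every branch into a single `result` variable so that, on Android 10 (API 29) and newer, the follow-up `accessMediaLocation` request can run regardless of which media permission was granted first, and `state` is assigned exactly once at the end. A condensed sketch of the control flow; it omits the separate `videos` request the real code performs on Android 13:

```dart
import 'dart:io';
import 'package:device_info_plus/device_info_plus.dart';
import 'package:permission_handler/permission_handler.dart';

Future<PermissionStatus> requestGalleryPermission() async {
  PermissionStatus result;
  if (Platform.isAndroid) {
    final sdkInt = (await DeviceInfoPlugin().androidInfo).version.sdkInt;
    result = sdkInt <= 32
        ? await Permission.storage.request() // legacy storage permission
        : await Permission.photos.request(); // Android 13+ granular media
    if (result.isGranted && sdkInt >= 29) {
      // Location EXIF is redacted on Android 10+ unless this is granted.
      result = await Permission.accessMediaLocation.request();
    }
  } else {
    result = await Permission.photos.request(); // iOS photo library
  }
  return result;
}
```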

/// Checks the current state of the gallery permissions without
/// requesting them again
Future<PermissionStatus> getGalleryPermissionStatus() async {
PermissionStatus result;
// Android 32 and below uses Permission.storage
if (Platform.isAndroid) {
final androidInfo = await DeviceInfoPlugin().androidInfo;
if (androidInfo.version.sdkInt <= 32) {
// Android 32 and below needs storage
final permission = await Permission.storage.status;
state = permission;
return permission;
result = permission;
} else {
// Android 33 needs photo & video
final photos = await Permission.photos.status;

@@ -84,18 +89,23 @@ class GalleryPermissionNotifier extends StateNotifier<PermissionStatus> {
status = PermissionStatus.denied;
}

state = status;
return status;
result = status;
}
if (state == PermissionStatus.granted &&
androidInfo.version.sdkInt >= 29) {
result = await Permission.accessMediaLocation.status;
}
} else {
// iOS can use photos
final photos = await Permission.photos.status;
state = photos;
return photos;
result = photos;
}
state = result;
return result;
}
}

final galleryPermissionNotifier
= StateNotifierProvider<GalleryPermissionNotifier, PermissionStatus>
((ref) => GalleryPermissionNotifier());
final galleryPermissionNotifier =
StateNotifierProvider<GalleryPermissionNotifier, PermissionStatus>(
(ref) => GalleryPermissionNotifier(),
);

@@ -82,8 +82,12 @@ class AssetService {
_db.writeTxn(() => _db.eTags.put(ETag(id: user.id, value: newETag)));
}
return assets.map(Asset.remote).toList();
} catch (e, stack) {
log.severe('Error while getting remote assets', e, stack);
} catch (error, stack) {
log.severe(
'Error while getting remote assets: ${error.toString()}',
error,
stack,
);
return null;
}
}

@@ -100,8 +104,8 @@ class AssetService {

return await _apiService.assetApi
.deleteAsset(DeleteAssetDto(ids: payload));
} catch (e) {
debugPrint("Error getAllAsset ${e.toString()}");
} catch (error, stack) {
log.severe("Error deleteAssets ${error.toString()}", error, stack);
return null;
}
}

@@ -24,6 +24,10 @@ ThemeData base = ThemeData(
chipTheme: const ChipThemeData(
side: BorderSide.none,
),
sliderTheme: const SliderThemeData(
thumbShape: RoundSliderThumbShape(enabledThumbRadius: 7),
trackHeight: 2.0,
),
);

ThemeData immichLightTheme = ThemeData(

@@ -99,6 +103,7 @@ ThemeData immichLightTheme = ThemeData(
),
),
chipTheme: base.chipTheme,
sliderTheme: base.sliderTheme,
popupMenuTheme: const PopupMenuThemeData(
shape: RoundedRectangleBorder(
borderRadius: BorderRadius.all(Radius.circular(10)),

@@ -218,6 +223,7 @@ ThemeData immichDarkTheme = ThemeData(
),
),
chipTheme: base.chipTheme,
sliderTheme: base.sliderTheme,
popupMenuTheme: const PopupMenuThemeData(
shape: RoundedRectangleBorder(
borderRadius: BorderRadius.all(Radius.circular(10)),

2
mobile/openapi/README.md
generated
@@ -3,7 +3,7 @@ Immich API

This Dart package is automatically generated by the [OpenAPI Generator](https://openapi-generator.tech) project:

- API version: 1.63.2
- API version: 1.64.0
- Build package: org.openapitools.codegen.languages.DartClientCodegen

## Requirements

@@ -1409,7 +1409,7 @@ packages:
source: hosted
version: "11.3.0"
wakelock:
dependency: transitive
dependency: "direct main"
description:
name: wakelock
sha256: "769ecf42eb2d07128407b50cb93d7c10bd2ee48f0276ef0119db1d25cc2f87db"

@@ -2,7 +2,7 @@ name: immich_mobile
description: Immich - selfhosted backup media file on mobile phone

publish_to: "none"
version: 1.63.2+86
version: 1.64.0+87
isar_version: &isar_version 3.1.0+1

environment:

@@ -47,6 +47,7 @@ dependencies:
permission_handler: ^10.2.0
device_info_plus: ^8.1.0
crypto: ^3.0.3 # TODO remove once native crypto is used on iOS
wakelock: ^0.6.2

openapi:
path: openapi

@@ -4374,7 +4374,7 @@
"info": {
"title": "Immich",
"description": "Immich API",
"version": "1.63.2",
"version": "1.64.0",
"contact": {}
},
"tags": [],

4
server/package-lock.json
generated
@@ -1,12 +1,12 @@
{
"name": "immich",
"version": "1.63.2",
"version": "1.64.0",
"lockfileVersion": 2,
"requires": true,
"packages": {
"": {
"name": "immich",
"version": "1.63.2",
"version": "1.64.0",
"license": "UNLICENSED",
"dependencies": {
"@babel/runtime": "^7.20.13",

@@ -1,6 +1,6 @@
{
"name": "immich",
"version": "1.63.2",
"version": "1.64.0",
"description": "",
"author": "",
"private": true,

@@ -67,6 +67,33 @@ const validMimeTypes = [
'image/x-sony-arw',
'image/x-sony-sr2',
'image/x-sony-srf',
'image/dng',
'image/ari',
'image/cr2',
'image/cr3',
'image/crw',
'image/erf',
'image/raf',
'image/3fr',
'image/fff',
'image/dcr',
'image/k25',
'image/kdc',
'image/rwl',
'image/mrw',
'image/nef',
'image/orf',
'image/ori',
'image/raw',
'image/pef',
'image/cin',
'image/cap',
'image/iiq',
'image/srw',
'image/x3f',
'image/arw',
'image/sr2',
'image/srf',
'video/3gpp',
'video/mp2t',
'video/mp4',

@@ -87,6 +87,34 @@ describe('assetUploadOption', () => {
{ mimetype: 'image/x-sony-arw', extension: 'arw' },
{ mimetype: 'image/x-sony-sr2', extension: 'sr2' },
{ mimetype: 'image/x-sony-srf', extension: 'srf' },

{ mimetype: 'image/dng', extension: 'dng' },
{ mimetype: 'image/ari', extension: 'ari' },
{ mimetype: 'image/cr2', extension: 'cr2' },
{ mimetype: 'image/cr3', extension: 'cr3' },
{ mimetype: 'image/crw', extension: 'crw' },
{ mimetype: 'image/erf', extension: 'erf' },
{ mimetype: 'image/raf', extension: 'raf' },
{ mimetype: 'image/3fr', extension: '3fr' },
{ mimetype: 'image/fff', extension: 'fff' },
{ mimetype: 'image/dcr', extension: 'dcr' },
{ mimetype: 'image/k25', extension: 'k25' },
{ mimetype: 'image/kdc', extension: 'kdc' },
{ mimetype: 'image/rwl', extension: 'rwl' },
{ mimetype: 'image/mrw', extension: 'mrw' },
{ mimetype: 'image/nef', extension: 'nef' },
{ mimetype: 'image/orf', extension: 'orf' },
{ mimetype: 'image/ori', extension: 'ori' },
{ mimetype: 'image/raw', extension: 'raw' },
{ mimetype: 'image/pef', extension: 'pef' },
{ mimetype: 'image/cin', extension: 'cin' },
{ mimetype: 'image/cap', extension: 'cap' },
{ mimetype: 'image/iiq', extension: 'iiq' },
{ mimetype: 'image/srw', extension: 'srw' },
{ mimetype: 'image/x3f', extension: 'x3f' },
{ mimetype: 'image/arw', extension: 'arw' },
{ mimetype: 'image/sr2', extension: 'sr2' },
{ mimetype: 'image/srf', extension: 'srf' },
{ mimetype: 'video/3gpp', extension: '3gp' },
{ mimetype: 'video/mp2t', extension: 'm2ts' },
{ mimetype: 'video/mp2t', extension: 'mts' },

2
web/package-lock.json
generated
@@ -14,7 +14,7 @@
"copy-image-clipboard": "^2.1.2",
"handlebars": "^4.7.7",
"justified-layout": "^4.1.0",
"leaflet": "^1.9.3",
"leaflet": "^1.9.4",
"leaflet.markercluster": "^1.5.3",
"lodash-es": "^4.17.21",
"luxon": "^3.2.1",

@@ -64,7 +64,7 @@
"copy-image-clipboard": "^2.1.2",
"handlebars": "^4.7.7",
"justified-layout": "^4.1.0",
"leaflet": "^1.9.3",
"leaflet": "^1.9.4",
"leaflet.markercluster": "^1.5.3",
"lodash-es": "^4.17.21",
"luxon": "^3.2.1",

2
web/src/api/open-api/api.ts
generated
@@ -4,7 +4,7 @@
 * Immich
 * Immich API
 *
 * The version of the OpenAPI document: 1.63.2
 * The version of the OpenAPI document: 1.64.0
 *
 *
 * NOTE: This class is auto generated by OpenAPI Generator (https://openapi-generator.tech).

2
web/src/api/open-api/base.ts
generated
@@ -4,7 +4,7 @@
 * Immich
 * Immich API
 *
 * The version of the OpenAPI document: 1.63.2
 * The version of the OpenAPI document: 1.64.0
 *
 *
 * NOTE: This class is auto generated by OpenAPI Generator (https://openapi-generator.tech).

2
web/src/api/open-api/common.ts
generated
@@ -4,7 +4,7 @@
 * Immich
 * Immich API
 *
 * The version of the OpenAPI document: 1.63.2
 * The version of the OpenAPI document: 1.64.0
 *
 *
 * NOTE: This class is auto generated by OpenAPI Generator (https://openapi-generator.tech).

2
web/src/api/open-api/configuration.ts
generated
@@ -4,7 +4,7 @@
 * Immich
 * Immich API
 *
 * The version of the OpenAPI document: 1.63.2
 * The version of the OpenAPI document: 1.64.0
 *
 *
 * NOTE: This class is auto generated by OpenAPI Generator (https://openapi-generator.tech).

2
web/src/api/open-api/index.ts
generated
@@ -4,7 +4,7 @@
 * Immich
 * Immich API
 *
 * The version of the OpenAPI document: 1.63.2
 * The version of the OpenAPI document: 1.64.0
 *
 *
 * NOTE: This class is auto generated by OpenAPI Generator (https://openapi-generator.tech).

@@ -246,7 +246,7 @@
{#await import('../shared-components/leaflet') then { Map, TileLayer, Marker }}
<Map center={latlng} zoom={14}>
<TileLayer
urlTemplate={'https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png'}
urlTemplate={'https://tile.openstreetmap.org/{z}/{x}/{y}.png'}
options={{
attribution:
'© <a href="https://www.openstreetmap.org/copyright">OpenStreetMap</a>'

@@ -37,3 +37,11 @@ export const mapSettings = persisted<MapSettings>('map-settings', {
});

export const videoViewerVolume = persisted<number>('video-viewer-volume', 1, {});

export interface AlbumViewSettings {
sortBy: string;
}

export const albumViewSettings = persisted<AlbumViewSettings>('album-view-settings', {
sortBy: 'Most recent photo'
});

@@ -98,6 +98,35 @@ describe('get file mime type', () => {
{ mimetype: 'image/x-sony-arw', extension: 'arw' },
{ mimetype: 'image/x-sony-sr2', extension: 'sr2' },
{ mimetype: 'image/x-sony-srf', extension: 'srf' },
/*** The following MIME types are allowed for upload but not returned by getFileMimeType() ***
{ mimetype: 'image/dng', extension: 'dng' },
{ mimetype: 'image/ari', extension: 'ari' },
{ mimetype: 'image/cr2', extension: 'cr2' },
{ mimetype: 'image/cr3', extension: 'cr3' },
{ mimetype: 'image/crw', extension: 'crw' },
{ mimetype: 'image/erf', extension: 'erf' },
{ mimetype: 'image/raf', extension: 'raf' },
{ mimetype: 'image/3fr', extension: '3fr' },
{ mimetype: 'image/fff', extension: 'fff' },
{ mimetype: 'image/dcr', extension: 'dcr' },
{ mimetype: 'image/k25', extension: 'k25' },
{ mimetype: 'image/kdc', extension: 'kdc' },
{ mimetype: 'image/rwl', extension: 'rwl' },
{ mimetype: 'image/mrw', extension: 'mrw' },
{ mimetype: 'image/nef', extension: 'nef' },
{ mimetype: 'image/orf', extension: 'orf' },
{ mimetype: 'image/ori', extension: 'ori' },
{ mimetype: 'image/raw', extension: 'raw' },
{ mimetype: 'image/pef', extension: 'pef' },
{ mimetype: 'image/cin', extension: 'cin' },
{ mimetype: 'image/cap', extension: 'cap' },
{ mimetype: 'image/iiq', extension: 'iiq' },
{ mimetype: 'image/srw', extension: 'srw' },
{ mimetype: 'image/x3f', extension: 'x3f' },
{ mimetype: 'image/arw', extension: 'arw' },
{ mimetype: 'image/sr2', extension: 'sr2' },
{ mimetype: 'image/srf', extension: 'srf' },
**/
{ mimetype: 'video/3gpp', extension: '3gp' },
{ mimetype: 'video/mp2t', extension: 'm2ts' },
{ mimetype: 'video/mp2t', extension: 'mts' },

@@ -1,4 +1,5 @@
<script lang="ts">
import { albumViewSettings } from '$lib/stores/preferences.store';
import AlbumCard from '$lib/components/album-page/album-card.svelte';
import { goto } from '$app/navigation';
import ContextMenu from '$lib/components/shared-components/context-menu/context-menu.svelte';

@@ -17,7 +18,6 @@
export let data: PageData;

const sortByOptions = ['Most recent photo', 'Last modified', 'Album title'];
let selectedSortBy = sortByOptions[0];

const {
albums: unsortedAlbums,

@@ -39,15 +39,16 @@
};

$: {
if (selectedSortBy === 'Most recent photo') {
const { sortBy } = $albumViewSettings;
if (sortBy === 'Most recent photo') {
$albums = $unsortedAlbums.sort((a, b) =>
a.lastModifiedAssetTimestamp && b.lastModifiedAssetTimestamp
? sortByDate(a.lastModifiedAssetTimestamp, b.lastModifiedAssetTimestamp)
: sortByDate(a.updatedAt, b.updatedAt)
);
} else if (selectedSortBy === 'Last modified') {
} else if (sortBy === 'Last modified') {
$albums = $unsortedAlbums.sort((a, b) => sortByDate(a.updatedAt, b.updatedAt));
} else if (selectedSortBy === 'Album title') {
} else if (sortBy === 'Album title') {
$albums = $unsortedAlbums.sort((a, b) => a.albumName.localeCompare(b.albumName));
}
}

@@ -86,7 +87,7 @@
</div>
</LinkButton>

<Dropdown options={sortByOptions} bind:value={selectedSortBy} />
<Dropdown options={sortByOptions} bind:value={$albumViewSettings.sortBy} />
</div>

<!-- Album Card -->

@@ -118,7 +118,7 @@
}}
>
<TileLayer
urlTemplate={'https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png'}
urlTemplate={'https://tile.openstreetmap.org/{z}/{x}/{y}.png'}
options={{
attribution:
'© <a href="https://www.openstreetmap.org/copyright">OpenStreetMap</a>'