be aware that our image optimizations (resizer, responsive images) will not apply to images coming from an external URL; also, you will need a CDN on top of the storage
I am not aware of what contentcredentials.org is or what it requires to function; maybe it's something small and we just need to enable it
not related to the OP's question, but I noticed that images I dropped into WS assets keep their original size (when I check the asset details view). @Oleg Isonen is that normal? Is optimization done when images are served to the website?
exactly, they are optimized on the fly depending on the user's browser, resolution, operating system, etc.
it serves AVIF in one case, WebP in another
and we currently can't allow this for any random URL, otherwise someone could seriously damage us
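To make "optimized on the fly depending on the browser" concrete, here is a minimal sketch of the format negotiation, assuming the optimizer picks the output format from the request's Accept header (the function name is illustrative, not Webstudio's or Cloudflare's actual code):

```typescript
// Pick the best output format the requesting browser advertises support for.
// Purely illustrative: real services also factor in DPR, width hints, etc.
function pickImageFormat(acceptHeader: string): "avif" | "webp" | "jpeg" {
  if (acceptHeader.includes("image/avif")) return "avif";
  if (acceptHeader.includes("image/webp")) return "webp";
  return "jpeg"; // fallback for browsers that advertise neither
}

// A modern Chrome request advertises AVIF support:
console.log(pickImageFormat("image/avif,image/webp,image/apng,*/*")); // "avif"
```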
Here are the implementation details of getting content credentials working:
https://opensource.contentauthenticity.org/docs/introduction
It's part of an open source project led by Adobe. The idea is that any information associated with how the image came to be is securely stored in the image itself.
I wouldn't expect the images to be optimized by Webstudio in this case. At least based on my initial understanding, it requires the image to be unaltered in order to work. There might be ways to serve an optimized image while serving the details from the original asset, but I'm not sure about that yet
I am guessing it's just metadata inside the image, which can be added or kept in there after optimizing, but it would need to be added to Cloudflare's image optimizer service we are using
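To illustrate that guess: a minimal sketch using sharp as a stand-in for a typical optimizer pipeline (not what Cloudflare actually runs). By default the output drops the input's metadata; it has to be carried over explicitly, and even then the resized bytes no longer match whatever hash was originally signed:

```typescript
import sharp from "sharp";

// Resize and recompress an image, explicitly carrying metadata over.
async function optimize(input: Buffer): Promise<Buffer> {
  return sharp(input)
    .resize({ width: 1200 })
    // sharp strips EXIF/XMP/ICC from the output unless .withMetadata() is
    // called. Even with it, the optimized bytes differ from the original,
    // so any signature computed over the original pixels won't verify.
    .withMetadata()
    .webp({ quality: 80 })
    .toBuffer();
}
```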
I am also wondering whether the project is too young and whether it has solid technical ground
I haven't read through it, but I have questions
e.g. is there a way to do that authenticity check securely, or is this going to be broken easily
is this actual protection or just informational
what concrete cases is this actually solving, and is it actually solving them
TLDR: They want to promote transparency in digital content through this.
At capture: Each asset is cryptographically hashed and signed to capture a verifiable, tamper-evident record, allowing changes to the asset or its metadata to be detected and exposed. Creators can choose to attach attribution information and usage signals directly to their assets.
Editing: The tools would help creators verifiably capture the types of edits used to produce their content and whether generative AI was used in the process, and share that information with their audiences. Proactively and transparently sharing this information can help to increase feelings of trust and authenticity around a creator’s work.
Publishing and sharing: The tools would make it easy for any website to support the native display of compliant digital provenance information when it is available for published content. This allows site visitors to easily identify when this information is available to them, and inspect it for themselves.
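To make the "cryptographically hashed and signed" part above concrete, here is a minimal sketch of the tamper-evidence idea using Node's built-in crypto module. The real spec embeds a signed manifest in the file and relies on certificate chains; this only shows why any change to the bytes becomes detectable:

```typescript
import { createHash, generateKeyPairSync, sign, verify } from "node:crypto";

// A stand-in for the creator's (or tool's) signing identity.
const { privateKey, publicKey } = generateKeyPairSync("ed25519");

// "Capture": hash the asset bytes and sign the digest.
function signAsset(imageBytes: Buffer) {
  const digest = createHash("sha256").update(imageBytes).digest();
  return { digest, signature: sign(null, digest, privateKey) };
}

// "Verify": recompute the digest and check it against the signature.
function verifyAsset(imageBytes: Buffer, signature: Buffer): boolean {
  const digest = createHash("sha256").update(imageBytes).digest();
  return verify(null, digest, publicKey, signature);
}

const original = Buffer.from("...image bytes...");
const { signature } = signAsset(original);
console.log(verifyAsset(original, signature)); // true
// Any edit (resize, recompress, single pixel change) breaks verification:
console.log(verifyAsset(Buffer.concat([original, Buffer.from([0])]), signature)); // false
```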
if they need the full original image to verify it, then you basically need to link the image to the source and let the verification run against the source
otherwise the tool that optimizes the images would have to do that in a way that is still verifiable, or something
my head is spinning, this is way out of our current roadmap
just linking images isn't super helpful, because you can show one image that is fake and link it to another image that is real; so to verify the image I am seeing, the image itself needs to be verifiable
that means verification needs to be part of optimization
Yeah. While working with CMS platforms, I have noticed that they strip off all the metadata when optimizing images, which makes this impossible. So the easiest way around it is to use separate hosting for the full-size images.
I don't think that's the solution; you are just trying to serve the original image without optimization to make it verifiable
you will lose the optimizations this way and make your site slow
this is not the right solution
the right solution is to truly understand how verification works and how it has to be integrated into the optimization process
I am actively working with these folks on a few design-related things.
It is possible to optimize the image before serving, but then the image has to be "signed" by the server, indicating that the image has been updated.
and it has to be signed every time it is resized or optimized; services like the ones above only keep images in cache for a period of time, and if they are not used, they get deleted
so signing needs to happen every time optimization happens
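In other words, something like this sketch, assuming a hypothetical reSign() step the optimizer would have to run on every cache miss (sharp is only a placeholder for the resizer; a real implementation would presumably attach a new signed manifest referencing the original as its parent):

```typescript
import sharp from "sharp";

// reSign() is hypothetical: whatever mechanism attaches a fresh, signed
// provenance record to the newly produced bytes.
type ReSign = (optimized: Buffer, parent: Buffer) => Promise<Buffer>;

async function optimizeAndReSign(
  original: Buffer,
  width: number,
  reSign: ReSign
): Promise<Buffer> {
  // The resize/recompress step produces new bytes...
  const optimized = await sharp(original)
    .resize({ width })
    .avif({ quality: 60 })
    .toBuffer();
  // ...so the old signature no longer matches, and the optimizer itself has
  // to sign again, referencing the original asset it derived from.
  return reSign(optimized, original);
}
```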
I am not sure how CPU-intensive the process is or how adoption of this will go; I have my doubts
generally I don't see what incentives would need to be in place for this to achieve broad adoption
currently the value is low, but maybe in the future idk
their best chance is to work with Cloudflare and other services to incorporate signing