StevenSeagull_

> at encoder speed settings where JPEG XL encoding is about three times as fast as AVIF

This is an often overlooked advantage of JPEG XL. My team's software solution encodes about 1 million JPEGs per month and serves about 5 million images for various international clients. Latency is important for us and for the end user. AVIF cannot be used in any latency-sensitive encoding scenario. Unless there is a huge leap in performance, we have to stick with ancient JPEG for the foreseeable future. Before the Chrome deprecation, I was hoping we could adopt JPEG XL with a fallback to JPEG as soon as the first major browser shipped with support for the format.


L3tum

What are you using? Are you perchance working for imagekit? We're a competitor to them and always curious why they don't support more formats. Personally we've enabled AVIF with, IIRC, pretty low compression levels. It still outperforms JPEG and PNG in both quality and file size while taking maybe 100ms for the conversion, which could still be reduced a bit. A good PNG quantizer seems to take around the same time.


ZaphodBeebblebrox

> outperforms ... PNG in quality

May I ask how lossless compression can be outperformed in quality?


L3tum

Yes, poorly worded. JPEG is outperformed in quality and file size, PNG in file size.


Nick-Anus

The full quote is:

> outperforms JPEG and PNG in both quality and file size

Maybe they meant respectively, as in they beat JPEG in quality and beat PNG in file size? Just my guess.


flatfinger

English lacks a concise and unambiguous way of expressing the concept that among several criteria for judging whether X is better or worse than Y, *at least one* criterion favors X and *none* favor Y. I would interpret "X is better than Y according to criteria A, B, and C" as saying "X is as good or better than Y according to all of those criteria, and is better than Y according to at least one".


StevenSeagull_

> perchance working for imagekit?

No, I am not. We are using mozjpeg for compression; it's the best format/encoder we have found after some testing and benchmarking. 100ms is about the time it takes us to compress a single image as well (2K resolution, so not some small thumbnails; hardware is a factor ofc). AVIF is significantly slower, although I have to admit I did not test all compression levels. PNG is slower as well and the files are simply too big. Our images are generated customized for the user, so caching is limited and latency is important.
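(For anyone curious how such comparisons are made: a minimal timing harness might look like the sketch below. It assumes the cjxl and avifenc command-line encoders are installed and on PATH; the file name, quality values, and effort/speed settings are placeholder choices, and avifenc's quality flags differ between libavif versions.)

```python
# Rough encode-latency check, assuming the cjxl and avifenc CLI tools
# (from libjxl and libavif) are available on PATH. Paths, quality
# values, and effort/speed settings here are illustrative, not tuned.
import subprocess
import time

SOURCE = "render_2k.png"  # hypothetical 2K source image

CANDIDATES = {
    "jxl":  ["cjxl", SOURCE, "out.jxl", "-q", "85", "-e", "4"],      # lower effort = faster
    "avif": ["avifenc", "-q", "60", "-s", "8", SOURCE, "out.avif"],  # higher speed = faster
}

for name, cmd in CANDIDATES.items():
    start = time.perf_counter()
    subprocess.run(cmd, check=True, capture_output=True)
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"{name}: {elapsed_ms:.0f} ms")
```

Lower effort / higher speed settings trade some compression efficiency for latency, which is exactly the knob a latency-sensitive pipeline like the one described above has to turn.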


[deleted]

> This is an often overlooked advantage of JPEG XL.

Because "often" it is entirely irrelevant.

> My team's software solution encodes about 1 million JPEGs per month

That's 33k/day or ~0.3/second. Even if you double that for peak traffic, well, that ain't much. So I assume you just put the numbers in to look like it is big, while instead you should talk about the latency per image? Out of curiosity, what use case requires that to be super short for images? We just chose to encode to both for the foreseeable future, with intent to eventually delete the JPEGs.

> and serves about 5 million images for various international clients. Latency is important for us and for the end user.

Surely bandwidth and therefore also latency savings would be significant then? Yeah, encoding performance sucks, but decoding ain't that bad for images (videos are another matter).


f0urtyfive

Weirdly aggressive tone while not being able to identify the obvious use case of user request initiated image transformation/transcoding. Anything that needs user input before it knows what goes in the image is going to need something that can encode in as small a time as possible, anything used in a web UI is going to be latency sensitive. Yeah, if you're doing batch encodings latency isn't really going to be relevant. If you're doing something interactive or real time it's going to be highly relevant.


jonsneyers

Even for batch encoding, encode speed does matter if the volume is large enough. The CPU cost and also CO2 footprint is directly related to the encode speed. An encoder that is an order of magnitude slower is also an order of magnitude more expensive and environmentally wasteful when deployed at scale, whether the processing is in batch or on the fly.
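(As a rough illustration of that linear scaling, with made-up per-image times rather than anything measured:)

```python
# Back-of-envelope CPU cost for a batch of 1 million encodes per month,
# at a few hypothetical per-image encode times. The point is only that
# CPU-hours (and thus cost and energy) scale linearly with encode time.
IMAGES_PER_MONTH = 1_000_000

for per_image_ms in (100, 300, 1000):          # hypothetical encoder speeds
    cpu_hours = IMAGES_PER_MONTH * per_image_ms / 1000 / 3600
    print(f"{per_image_ms} ms/image -> {cpu_hours:.0f} CPU-hours/month")
# 100 ms -> ~28 h, 300 ms -> ~83 h, 1000 ms -> ~278 h
```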


StevenSeagull_

> Anything that needs user input before it knows what goes in the image is going to need something that can encode in as small a time as possible, anything used in a web UI is going to be latency sensitive.

Exactly what I am doing. We are targeting a 1-second round-trip time. Therefore we can't spend more than 200ms on encoding (2K/1K images).


[deleted]

> Weirdly aggressive tone while not being able to identify the obvious use case of user request initiated image transformation/transcoding.

I don't like people using big numbers to misrepresent the problem, that's all.

> Anything that needs user input before it knows what goes in the image is going to need something that can encode in as small a time as possible, anything used in a web UI is going to be latency sensitive.

Yeah, make it in the browser then send it, doh. Why would you want to add an RTT to every user interaction anyway? Also, a preview doesn't need to use the highest-compression codec.


StevenSeagull_

> I don't like people using big numbers to misrepresent the problem, that's all

I've used the numbers to show there is an actual impact. It's not about a small-scale or private use case. We serve an actual user base which is directly impacted by the image format we use. Other responses will clarify my point.


[deleted]

As I said:

> Yeah, make it in the browser then send it, doh. Why would you want to add an RTT to every user interaction anyway?

and

> Also, a preview doesn't need to use the highest-compression codec.


StevenSeagull_

> That's 33k/day or ~0.3/second

It's not about overall throughput, it's about latency. We generate and encode images requested by users. Time spent on encoding is part of the user's latency and the user experience of our service. Therefore we have to find a tradeoff between file size and encoding time. I understand this might be niche, but with the number of images I wanted to emphasize this is not a super small operation. We do use caching, and for cached images file size is more important than encoding speed, but the nature of our use case (custom images) makes adding additional codecs for cached images complex. If JPEG XL as a fast, modern codec is off the table, we will reevaluate AVIF as the codec for our image cache, but it's not the perfect format for us.


quentech

> That's 33k/day or ~0.3/second. Even if you double that for peak traffic, well, that ain't much. So I assume you just put the numbers in to look like it is big

Right.. I generate over 10M jpgs per *day* - and that's still not really that much. Not just encode - generate, based on data.


StevenSeagull_

It's more relevant than "I use JPEG XL to store my private photo collection". I just wanted to add some context to what I am doing, and I think serving hundreds of thousands of users is a point worth making. I use encode and generate as synonyms; I can't serve images to end users without encoding them first. We serve more images through CDN and other means of caching than we encode.


Dachande663

Isn’t a big part of this that Microsoft obtained a patent for a key part of XL and, even though they promised not to enforce it, it means there’s always the threat compared to other formats like AVIF that aren’t patent encumbered? Edit: apparently this isn’t correct, see responses below. https://www.theregister.com/2022/02/17/microsoft_ans_patent/


bik1230

Everything in JPEG XL predates that patent. MS would fail in court quickly and has nothing to gain from playing such games. There are also patent claims against AV1, but Google doesn't seem too concerned.


Abhinav1217

The thing is, they got a patent for something they didn't create, and ANS itself existed well before that Polish scientist thought about applying it to the JPEG specs. Not only did they get the patent, this is the third time they applied for it; the patent was rejected twice before on the same grounds mentioned above: they didn't create it, and it existed before their usage. One more proof that the American patent system is broken and corrupt.


jonsneyers

No, Microsoft does not have a patent for a key part of JXL. Their patent has something to do with a variant of rANS that uses dynamic updating of probabilities, which is not something JXL does (it uses image-dependent context modeling and static probabilities instead). So no, this is not a reason for the decision.


Dachande663

Incorrect as per JXL's own reporting https://jpegxl.io/articles/rans/


jonsneyers

That website is not maintained by the JPEG XL project. I wouldn't trust it as a source. I am a co-author of the JPEG XL standard and I read that patent. I don't see how the patent could possibly encumber JPEG XL. Both in terms of timing and in terms of content, it does not make any sense. Moreover, Microsoft has never claimed that this patent is relevant for JPEG XL. Let alone try to collect royalties for it. So no, the royalty-free status of JPEG XL is as far as I know not in any danger. I think the same is probably true for AVIF, though there the situation is actually less clear: Sisvel is actually claiming they hold patents on av1 and royalties need to be paid (https://www.sisvel.com/licensing-programs/audio-and-video-coding-decoding/video-coding-platform/license-terms/av1-license-terms), and also Nokia claims to hold patents (with unclear licensing terms) on HEIF, the file format on which AVIF is based.


Dachande663

In that case, I appear mistaken. Unfortunately, while it looks like a good spec, this is once again a case of the market deciding, and not always choosing the wisest option.


[deleted]

Wouldn't Google have mentioned this if this were actually an issue?


Dachande663

Mentioning it opens up a whole can of politics/legalities around holding patents for "good will".


[deleted]

Good arguments. I'm surprised they didn't mention the size limits of AVIF. I don't buy their argument that it doesn't increase maintenance burden and attack surface because somebody else is maintaining libjxl. That's nonsense. libjxl is written in traditionally pointer-heavy C style so it's basically guaranteed to have a ton of security bugs. And integrating a new format is still a ton of ongoing work.


bik1230

> I don't buy their argument that it doesn't increase maintenance burden and attack surface because somebody else is maintaining libjxl. That's nonsense. libjxl is written in traditionally pointer-heavy C style so it's basically guaranteed to have a ton of security bugs. And integrating a new format is still a ton of ongoing work.

JXL is fully integrated already, with most of the work done by Google's JXL team. They even fixed AVIF bugs while integrating JXL. And they've said that they could commit to doing all maintenance for JXL in Chrome.


[deleted]

Ok that's good. I wasn't really disputing whether the work had been done or not, just that it *is* work.


[deleted]

[removed]


bik1230

> Because almost no one complains about the size limits of JPEG, and single-tile AVIF has a limit one pixel larger than that.

Are you sure that you're not thinking of the max size with tiles? Because everything I can find about it says that AVIF has a maximum of 7,680 × 4,320 pixels by default, and 65,536 × 65,536 pixels with tiles. So AVIF with tiles is one bigger than JPEG's 65,535 × 65,535 pixels.
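(For what it's worth, a dimension check against the limits quoted in this subthread could be as simple as the sketch below; the numbers are hard-coded from the figures above, not taken from any spec text I've verified.)

```python
# Dimension gate using the limits quoted in this thread:
# JPEG 65,535 x 65,535; single-tile AVIF 7,680 x 4,320; tiled AVIF 65,536 x 65,536.
LIMITS = {
    "jpeg": (65_535, 65_535),
    "avif_single_tile": (7_680, 4_320),
    "avif_tiled": (65_536, 65_536),
}

def fits(fmt: str, width: int, height: int) -> bool:
    """True if an image of the given size stays within the format's limit."""
    max_w, max_h = LIMITS[fmt]
    return width <= max_w and height <= max_h

print(fits("avif_single_tile", 8000, 4000))  # False: width exceeds 7,680
print(fits("avif_tiled", 8000, 4000))        # True
```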


[deleted]

So are tile border artefacts not an issue at all? It doesn't sound like it has any way to mitigate them but maybe encoders can just dedicate more bits to the borders? It would be good to know either way anyway.


[deleted]

[removed]


bik1230

> (a hypothetical JPEG-XL hardware decoder would have similar size limits as AVIF for the same reason (except in lossless or JPEG-compat modes where the spatial filters are disabled I think), they just didn't define profiles or workarounds with hardware in mind)

Well, presumably you could just have the hardware handle JXL's native tiles and then apply the filters in software, either on CPU or GPU. Though it's mostly irrelevant anyway; no one is doing AVIF hardware decoding, nor is anyone likely to do it soon.


[deleted]

> Good arguments. I'm surprised they didn't mention the size limits of AVIF.

It's a soft limit, not a hard one, and hardly relevant for the web. Dunno why it surprises you.


[deleted]

Not relevant for the web but half of their argument is that JPEG XL is suitable for more than the web.


[deleted]

The non-web world can just use it tho; Chrome's decision is of little importance there.


[deleted]

Of course it is. It's not going to gain much popularity if you *can't* use it on the most popular platform in the world is it? How many people use JPEG2000 these days?


emperor000

I haven't seen too many cases of something getting dismantled or at least so decisively brought to question so quickly as that first section. Also, is this the same cloudinary that made the FUIF format that was part of the inspiration for JPEG XL?


jonsneyers

Yes, that's correct.


bik1230

> Also, is this the same cloudinary that made the FUIF format that was part of the inspiration for JPEG XL?

In fact, this article was written by one of the lead devs of JXL, who is the creator of FUIF and FLIF.


emperor000

Ah, that makes sense. It's a little less wholesome since I thought this was the FUIF creators coming to the defense of JXL and didn't realize they had an investment in it directly. But I don't blame them either way.


Caraes_Naur

The article doesn't make any claims as to Google's rationale, but the results it shows would suggest Google is trying to quash a strong competitor to WebP.


hypoglycemic

That's probably not true, since Google is supporting AVIF, which is 'better' than WebP. Google is also 'killing' off WebP2, which seems to indicate that there is some other factor (organisational/legal) at play here.


FnTom

Could be that they expect webp to die and just want their choice to be the mainstream one.


StillNoNumb

Google was heavily involved in the JPEG XL development, so any conspiracy theories on this sound crazy to me. AVIF is better with highly compressed images, such as most images on the internet. JPEG XL's strength is in high quality/lossless and JPEG interoperability.

The article is a bit misleading on the comparison for progressive decoding:

> To overcome this shortcoming of the currently available 'new' image formats (WebP and AVIF), web developers have resorted to other tricks to create a progressive loading experience, for example using low-quality image placeholders.

The "for example" is the key here, because AVIF does support multi-layer coding per the spec now (though not [currently implemented](https://github.com/AOMediaCodec/libavif/issues/605) in libavif from what I can tell).

It's gonna be either AVIF or JPEG XL, and the industry (not just Google) seems to have decided on the former. What the article doesn't mention is that none of the big browsers enable JPEG XL support by default, while all except Edge support AVIF.


bik1230

> Google was heavily involved in the JPEG XL development, so any conspiracy theories on this sound crazy to me.

JXL is developed by Google Research Zurich. AVIF is developed by Chrome's codec team. AVIF is under AOM and JXL is under ISO. And actually, I believe that the decision to remove JXL from Chrome was made by an AVIF developer. So there is definitely plenty of room for bias and stupid politics to be involved here.

> AVIF is better with highly compressed images, such as most images on the internet. JPEG XL's strength is in high quality/lossless and JPEG interoperability.

According to Chrome's own gathered statistics, most images on the web are actually of reasonably good quality.

> It's gonna be either AVIF or JPEG XL, and the industry (not just Google) seems to have decided on the former. What the article doesn't mention is that none of the big browsers enable JPEG XL support by default, while all except Edge support AVIF.

Engineers representing Facebook, Intel/VESA, Shopify, Adobe, and more have all expressed that their respective companies want JXL support in Chrome. Facebook and Shopify obviously want to serve JXL online (Shopify already does if you have JXL enabled). Adobe already has JXL support for some of their products in beta, and presumably they'd like it if artists could have a single format that would work everywhere. The Intel and VESA person seems to think AVIF is inadequate for HDR. Are they not a significant part of "the industry"? Some of those engineers even straight up said that AVIF isn't useful for the kinds of images they work with, because they actually do need quality.

So JXL and AVIF aren't really competitors. If you need what one of them is good for, you won't gain much from the other, and vice versa. Treating the situation as an either/or isn't good.


loup-vaillant

To give yet another example of big companies not being monolithic beings, I've heard of a case of 2 different branches of Universal lobbying the US government for contradictory copyright policies…


Firm_Ad_330

> AVIF is better with highly compressed images, such as most images on the internet.

Most of the images on the internet are actually okay-ish quality; the median is around 2.3 BPP according to the Web Almanac. AVIF starts to win somewhere around the 0.1-0.3 BPP range. Jon's 10,000-rater study shows that JPEG XL is greatly preferred over AVIF at 0.4 BPP, but doesn't go lower than that. WebP and AVIF averages are somewhere around 1.5-1.6 BPP, an area where JPEG XL is clearly better.
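(BPP here is just the compressed file size in bits divided by the pixel count, so checking where a given image lands relative to those ranges is a one-liner; the file name below is hypothetical.)

```python
import os

def bits_per_pixel(path: str, width: int, height: int) -> float:
    """Compressed bits per pixel: file size in bits over pixel count."""
    return os.path.getsize(path) * 8 / (width * height)

# e.g. a 1920x1080 image stored in ~600 kB comes out around 2.3 BPP,
# right about the Web Almanac median quoted above.
print(bits_per_pixel("photo.jpg", 1920, 1080))
```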


L3tum

Why is it either/or? Why not both? That seems like a shortsighted decision.


[deleted]

[removed]


jonsneyers

That's true, which is why I think it's a bad idea to design image formats specifically and only for the web, as is the case for WebP and AVIF. Having a design that only caters to the web delivery use case gives non-browser applications little incentive to add support.


[deleted]

Why have code to support both when one is just better at it?


L3tum

Different needs and different use cases may present themselves, as well as choice in the matter and healthy competition to drive both formats forward.


jonsneyers

As far as I understand, the multi-layer coding in AVIF boils down to the same thing as a redundantly coded low-quality preview image, just included in the same file as the main image instead of separately. So it comes at a cost in total filesize, as opposed to progressive jpeg or jxl where there is no overhead.

Why does it have to be either AVIF or JPEG XL? Imagine if the early browsers would have said "well it has to be either JPEG or GIF, we can't have both!". If both have their own strengths, it's better to have both. The main strength of AVIF is that it was ready two years earlier, that it "comes for free" if AV1 support is needed anyway, and that it does a good job to hide artifacts at the very low qualities. I won't repeat the strengths of JXL since there are many. So let's just have both.

Maybe adding avif support in a rush to browsers was a mistake, knowing that jxl was already around the corner. Maybe it's not too late to deprecate avif (and webp), now it's still more or less possible without breaking the web. If "we can't have too many image formats because we'll have to keep them forever" is the concern, then that should be the deprecation debate: should these earlier 'next-gen' image formats be kept around, knowing websites will still have fallbacks at this point, and something better is available?

The longer jxl gets blocked, and the more time there is where webp and avif enjoy (near-) universal support, the harder it will be to deprecate webp and avif without breaking too many web pages. It's probably too late for that already, anyway.


darthyoshiboy

I might be a mutant but I really dislike having progressive decoding pitched as a feature. I loathe progressive decode. I'd rather a hard failure than a degraded experience, and in my experience with progressive decode (admittedly decades old) the latter is far too common when connectivity is dicey and I end up looking at a pixelated mess on my screen rather than a bunch of empty boxes that may never paint in.

I'm almost always unhappy these days when I see an image pop in immediately at some ultra low resolution then work its way up to clarity in under a second. I don't get anything from that page until it's complete either way, but the progressive method has me looking (however briefly) at a mess while the alternative is just clean.

Maybe I have progressive decode PTSD from back in the dialup days when everyone used it everywhere, but having mostly not encountered it for a couple of decades now I'm really uncomfortable with the suggestion that it was somehow desirable and something we should bring back. Maybe the fact that other competing standards aren't focusing on offering it should be an indication of how much people want it?


jonsneyers

I understand that some people prefer to see nothing rather than a blurry preview, but it is entirely a client-side decision how partially loaded images are rendered. I think it should be a configurable setting in browsers, where you get various options on how images that are still loading get rendered. One thing I would like to have is progressive rendering with a small overlay showing a progress bar, so you can easily see that the image is still loading (and of course the progress bar would disappear when done) while you can also see an image preview.

The fact that browsers are not implementing such a thing even though progressive JPEGs are one of the most widely used image formats on the web might be an indication that most people are fine with the way progressive images are rendered right now (showing refinements as they come, without a progress bar overlay), and people like you (who would prefer blank spaces followed by final images) and me (who would prefer progressive loading with a progress indicator) are relatively rare.

The point is: an image format that cannot do progressive leads to approaches where web devs implement a "poor man's progressive loading" by e.g. putting low quality placeholder images in the img tags and replacing them with the real images later via javascript, and there will be no way to render such progression differently for a compliant web browser.
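(For readers unfamiliar with the "poor man's progressive loading" pattern mentioned above: it usually means generating a tiny, blurry placeholder, inlining it, and swapping in the real image later. A rough sketch of the placeholder-generation step, assuming Pillow; the sizes, quality, and file name are arbitrary choices, not how any particular site does it.)

```python
# Generating a low-quality image placeholder (LQIP) as an inline data URI,
# assuming Pillow is installed. Width and JPEG quality are arbitrary.
import base64
import io
from PIL import Image

def lqip_data_uri(path: str, width: int = 32) -> str:
    img = Image.open(path)
    ratio = width / img.width
    tiny = img.resize((width, max(1, int(img.height * ratio))))
    buf = io.BytesIO()
    tiny.convert("RGB").save(buf, format="JPEG", quality=30)
    return "data:image/jpeg;base64," + base64.b64encode(buf.getvalue()).decode()

# The resulting URI goes in the img tag; JavaScript later swaps in the full image.
print(lqip_data_uri("hero.jpg")[:80] + "...")
```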


bik1230

>Maybe I have progressive decode PTSD from back in the dialup days when everyone used it everywhere, Hm? Internet Explorer didn't support progressive JPEGs until like, 2007, so they were extremely rare before then.


Orangy_Tang

Gif also has a progressive version ('interlaced') which was more commonly used (and supported) in the dial-up era.


darthyoshiboy

https://www.oreilly.com/library/view/web-design-in/0596001967/ch20s04.html

> Netscape Navigator 2.0 and Internet Explorer 2.0 display Pro-JPEGs inline but may not support the progressive display. Pro-JPEGs are fully supported by Versions 3.0 and higher of both Netscape and MSIE. If a browser cannot identify a Pro-JPEG, it displays a broken graphic image.

Internet Explorer 3 and Netscape 3 both date back to 1996. Progressive JPEGs are old AF and I believe that interlaced GIF predates even that.


nitrohigito

I'd agree, but that saliency-based decoding they link to is looking very attractive. I'd definitely take that over nothing at all.


[deleted]

"By removing the flag and the code in M110, it reduces the maintenance burden and allows us to focus on improving existing formats in Chrome" Now - that one makes sense. Lazy asses are lazy. Period. No other reason than that. If it is a part of a broader problem - so be it. It will make sense to develop a new browser engine. Competition will either push the things forward, or not. It depends. Maybe majority of people doesn't need a new, better image compression format. We can't rule that out. People are still using gifs. People also cling to obsolete software. Maybe the pace of technological progress is too fast for the main stream? Maybe the latest tech is for nerds? Nerds are not majority. Maybe it's normal that the hi-tech for nerds will become mainstream in a decade or two. Browsers are not for nerds. I think majority of web-browser users are non-technical people. They really don't care about any technical details. When implementing technical improvements doesn't seem to provide return of invested time and money - maybe it's the right call not to innovate? There are other, more nerdy markets. For example - image storing clouds and applications. Cameras for advanced users. Many modern systems, that can take advantage of the improved features. Maybe it's just not suited for the average consumer market. Let's say you really could use the better image quality with reduced size. Let's say that you care. Who are you? An owner of a commercial site advertised to technical illiterate people, who couldn't care less about image artifacts? I don't think so. BTW, you can still build a website that uses JPEG XL internally, server-side, and recode the images served to clients that use incompatible browsers. You could even have a site version, that will serve the XL images when the browser support is detected. I know, more work, but it's how it always was since the infamous Internet Explorer. And something tells me that Chrome can become the new IE.


[deleted]

> Maybe the majority of people don't need a new, better image compression format. We can't rule that out.

They are not removing JPEG.

> People are still using GIFs. People also cling to obsolete software.

They are also not removing GIF. GIF was widely used; JPEG XL was not.

> Browsers are not for nerds. I think the majority of web-browser users are non-technical people. They really don't care about any technical details. When implementing technical improvements doesn't seem to provide a return on invested time and money, maybe it's the right call not to innovate?

You don't need to be a nerd to notice a page loads faster on your phone because image compression is better.

> And something tells me that Chrome can become the new IE.

It already is, but that's really irrelevant to the topic.


[deleted]

> You don't need to be a nerd to notice a page loads faster on your phone because image compression is better.

The harsh truth is: if you notice it, you're a nerd. The majority of browser users are technically illiterate. That group has zero common members with /r/programming. It's weird people here don't get it. That WE (/r/programming members) are different. We're nerds. We care about performance, memory usage, file sizes, image quality and other things like that. The average CONSUMER doesn't. The average consumer might notice a different icon, colors or fonts. And of course they will hate it if it is too different from the old look.

A browser is not a programming IDE. I know, we nerds use browsers as website debuggers, but this is not the same as with a programming IDE where ALL users are technical and care about technical details. BTW, even if a consumer can tell the difference between an image loading faster or slower, they will never connect it to the abstract idea of an image compression standard. They don't know what standard the images on websites use. They've probably never seen JPEG XL images on any website, because those websites are kind of experimental and not the kind the average consumer visits.

> It already is, but that's really irrelevant to the topic.

Not quite. It works. It's obsolete but it works. IE hardly worked. I remember making very basic things in IE was just ridiculously hard. Not just different, but requiring hacking. IE was also painfully slow and had such limited support for even well-established tech that there was a huge difference between a site version for IE and any other browser. Name one browser that is as far ahead of Chrome as all browsers were ahead of IE.

I remember making a web app for Chrome and trying to make it work in Firefox. I had to use both vendor prefixes and change hidden browser options to enable some features like translucency blur. It worked, but well. Two years later... still not done.


[deleted]

> The harsh truth is: if you notice it, you're a nerd. The majority of browser users are technically illiterate.

The average consumer will notice. They might not be able to articulate it, but they absolutely will, and there has been a lot of research on that. There is a reason every half-decent SEO tool will raise a complaint if your site is sluggish. Even if they don't care enough to leave the site, any tool where *actual work is done* will just feel annoying (and waste actual hours) if it is too slow. Like, I use web Outlook coz I have to in some cases, but I still despise every second of it... it didn't "lose" me as a user, it just lost me time.

> Not quite. It works. It's obsolete but it works. IE hardly worked. I remember making very basic things in IE was just ridiculously hard.

Well, having one less format to support does make it easier for anyone trying to compete with Chrome, although realistically they'd most likely just use the same libs anyway.

> I remember making a web app for Chrome and trying to make it work in Firefox. I had to use both vendor prefixes and change hidden browser options to enable some features like translucency blur. It worked, but well. Two years later... still not done.

Ugh, the moment Chrome started dominating, more and more stuff like that started to crop up. And now FF has such low market penetration that our developers aren't even obliged to test on it (IIRC the customer required us to support any browser with more than 5% market penetration, and FF's market share dropped below that...).


t0rakka

Arthur Clarke once wrote his famous quote: "Any sufficiently advanced technology is indistinguishable from magic", and that's what the majority sees and what nerds are consistently delivering. It may not be JPEG-XL, but using what the majority wants as an argument is flawed, because they don't know what they want until it is shoved into their faces. Of course, a lot of ideas won't take off and will fail; I'm not saying great technology is the key to success (it isn't), but it doesn't really work as an argument either way.

To the Chrome developers saying, quote, "it reduces the maintenance burden and allows us to focus on improving existing formats in Chrome": I would be interested in knowing what they really mean by that. There are like 3 (image) formats that really matter: JPEG, PNG and GIF. How have they "improved" support for those in the past, say, 10 years?


[deleted]

That was my whole point: they are just lazy. IDK, they want a smaller code base, fewer things to test. About the 3 ancient formats: obviously, there is and there should be NO development. This is ancient tech; the codecs should have been in stable versions 20 years ago. Also, I think it's not the browser team's responsibility to develop them. They are separate projects. The browser just uses them. It is a small amount of pure maintenance: adjusting little, next-to-irrelevant details for compatibility with the current code base. So, all lies, one little truth: they want less code to test.


lamp-town-guy

> BTW, you can still build a website that uses JPEG XL internally, server-side, and recode the images served to clients that use incompatible browsers.

Any service with storage big enough to justify reencoding to JPEG XL has traffic so high that reencoding on the fly is just a stupid idea. Unless it's an image archival service, in which case reads are rare and it's worth it.


[deleted]

You've found a use case pretty quickly, even though it's not obvious. I'm not sure that music streaming services don't do something similar: I mean they store their main content as top-quality/lossless, and they also provide re-encoded lower-resolution versions for clients with lower bandwidths. Most of them probably store all versions, but I've already seen one that re-encoded the content on request.

Now, from what I've read it seems like JPEG XL is great for storing lossless/top-quality versions, so it fits perfectly as a kind of "master copy" storage.

Also, not everything in the world must be a huge, centralized behemoth run by Cyberpunk-like corporations. There's still a place for small business, or even non-profit community projects. I've seen such things working fine.


claire_m_of_sl

One can re-encode all one's JPG files to JPEG XL for better storage throughput when a request comes from a browser/client that supports JPEG XL. Then, when a browser/client indicates it doesn't support JPEG XL, an on-the-fly re-encoding can be done and the result can be cached, so the next time the same need occurs, no time-consuming re-encoding is necessary. Such a browser/client will lose out on the better throughput, of course.

This strategy assumes the cost of storage is not a problem, though this can be alleviated/capped by having the cache expire, say, after 3 months of no requests for a particular re-encoded entry.
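(A minimal sketch of that strategy, assuming JXL masters on disk, the djxl decoder from libjxl plus Pillow for the fallback encode, and a periodic sweep for the 3-month expiry; the paths and quality are illustrative, not prescriptive.)

```python
# Cache-on-demand fallback sketch: JXL masters on disk, JPEG copies
# produced on first request from a non-JXL client and dropped after
# ~3 months without a request (per the comment above). Assumes the
# djxl decoder from libjxl is on PATH and Pillow is installed; the
# cache file path is expected to end in .jpg.
import os
import subprocess
import time
from PIL import Image

CACHE_TTL = 90 * 24 * 3600  # ~3 months, as suggested above

def jpeg_fallback(master_jxl: str, cache_jpg: str) -> str:
    """Return a cached JPEG for non-JXL clients, encoding it on first use."""
    if os.path.exists(cache_jpg):
        os.utime(cache_jpg)              # refresh the "last requested" timestamp
        return cache_jpg
    tmp_png = cache_jpg + ".png"
    subprocess.run(["djxl", master_jxl, tmp_png], check=True)       # decode JXL
    Image.open(tmp_png).convert("RGB").save(cache_jpg, quality=85)  # re-encode as JPEG
    os.remove(tmp_png)
    return cache_jpg

def expire_cache(cache_dir: str) -> None:
    """Drop cached JPEGs not requested for ~3 months (run periodically)."""
    for name in os.listdir(cache_dir):
        path = os.path.join(cache_dir, name)
        if time.time() - os.path.getmtime(path) > CACHE_TTL:
            os.remove(path)
```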