> We also tested Rustls and its rustls-openssl-compat layer. Rustls could be an interesting library in the future, but the OpenSSL compatibility application binary interface (ABI) was not complete enough to make it work correctly with HAProxy in its current state.
Rustls is actively working on improving the OpenSSL compatibility layer. Hopefully we'll have it fully working for HAProxy soon!
We've also invested a lot in performance. Next week we'll be publishing a blog post about Rustls server side performance. Since it's relevant to discussions about TLS stack performance, here's a preview:
https://docs.google.com/document/d/1xFoRjb7pn4ZtL5BH7_ZwXNgN...
Wow... reading this article in full really made me lose hope in OpenSSL, the project and the library.
Going in, I was well aware of the expected inconveniences any new major OpenSSL release would trigger (esp. older, less actively maintained applications having to adapt their API usage to keep working), but what the linked github issue/PR comments hint at is just... mental.
As best illustrated by https://github.com/openssl/openssl/issues/20286#issuecomment... not only do the core developers seem not to care about runtime performance at all, they also seem to have a completely absurd perception of their own project, esp. in relation to other, genuinely small FOSS projects.
It's just wild. I hope this can still be turned around, and OpenSSL can somehow re-emerge from this clusterfuck as the stable bedrock of web transport security that we learned to rely on from 2014 onwards.
iforgotpassword 12 hours ago [-]
I don't know how it happens, but sometimes very old OSS projects turn into bikeshedding projects completely disconnected from reality, serving only the self-fulfillment of their developers. I've recently ranted about libgd here in another comment. Definitely not as bad, definitely not as mission critical as openssl, but same symptoms.
cryptonector 12 hours ago [-]
That was two years ago, and the issues have been fixed. Think of it this way: OpenSSL 3.0 was a release that added new, cleaner APIs and deprecated older, uglier APIs, so the focus was on that rather than on performance, but they've since put effort back into performance.
And by the way: OpenSSL has always cared a great deal about the performance of their cryptographic primitives, but the issue you linked to is NOT about that but about a performance degradation that happened in the code that _loads_ and references cryptographic algorithms.
IMO it's pretty lame to link to that issue as symptomatic of bigger problems in the OpenSSL community. And I say that as someone who was fairly involved in that issue as a user (I'm not an OpenSSL dev). It borders on the dishonest. How about giving them some accolades for responding to the issue, engaging the user community, and addressing the problem satisfactorily?
And as for them not having all the hardware that users run on, the perf issue in question did not require any particular hardware. The issue was systemic and easily reproduced.
Indeed, the OpenSSL devs ended up greatly improving how they do thread synchronization in general, because the issue was a combination of: a) using mutexes only, b) over time too many places got "get a reference to the cryptographic alg" calls that exacerbated all the problems with only using mutexes. So now OpenSSL has an RCU-like mechanism for thread synchronization that replaces mutexes for many read cases. That's _exactly_ the sort of large response program [to a systemic performance problem] that one would expect by an upstream that is well-funded and cares about performance.
So really, OpenSSL issue #20286 demonstrates the opposite of what you say! It demonstrates that OpenSSL
- is well-funded,
- has skilled developers,
- is responsive to their users, and
- is willing to make massive changes to their codebase.
Show us how all of that is true of the others before you attack OpenSSL for having had a performance problem.
BTW, it was I who proposed switching to RCU for these synchronization problems, and I half-expected to be told that that's not portable. Instead, after convincing them that the problem was quite real, they jumped wholeheartedly into developing an RCU-ish solution. How often does it happen to you that you point out a serious architectural problem to a popular upstream and they take it seriously and solve it as quickly as possible? I'm sure it's a rare occurrence. It sure is for me.
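To make the pattern concrete, here is a rough sketch of the read-mostly idea being discussed. This is a deliberate simplification of RCU (no grace-period tracking or deferred reclamation), not OpenSSL's actual implementation, and the struct and function names are made up:

```c
/* Deliberately simplified illustration of the read-mostly pattern discussed
 * above (NOT OpenSSL's actual code): readers take no lock and just load a
 * pointer to an immutable table; a writer builds a new table and publishes
 * it with an atomic store. Real RCU additionally tracks a grace period so
 * the old table can be freed once no reader can still see it; that part is
 * omitted here. */
#include <stdatomic.h>
#include <stddef.h>

struct alg_table {
    size_t count;
    const char **names;   /* hypothetical algorithm registry */
};

static _Atomic(struct alg_table *) current_table;

/* Read side (hot path): a single atomic load, safe from many threads. */
const struct alg_table *alg_table_snapshot(void)
{
    return atomic_load_explicit(&current_table, memory_order_acquire);
}

/* Write side (rare): build a complete new table elsewhere, then publish it.
 * The previous table is intentionally leaked in this sketch; reclaiming it
 * safely is exactly the hard part that RCU's grace periods solve. */
void alg_table_publish(struct alg_table *new_table)
{
    atomic_store_explicit(&current_table, new_table, memory_order_release);
}
```

The point is that the hot path pays only one atomic load, while all the copying and coordination cost lands on the rare write path.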
jiggawatts 4 hours ago [-]
> I'm not aware of any accessible hardware that goes beyond a small to, maybe at a stretch, moderate scale. As a small OpenSource project, we have to rely on the community to do this.
That's especially absurd considering that it's possible to rent a VM in the public cloud with hundreds of CPU cores for the price of a cup of coffee per hour!
I've seen several projects where their learned helplessness transforms over the years into an obstinate point of pride.
For example, refusing to provide Windows builds in an era of free build pipelines in GitHub, virtual machines, cross-platform build tools, etc...
"We don't do that here!" -- says person who could trivially do that.
stefan_ 9 hours ago [-]
OpenSSL was always the library developed by clowns, the only upside is it was used by everyone (so a sort of "can't be fired for buying IBM" situation). Big problem for libraries like WolfSSL, which is equally made by people that have a crypto hobby but doesn't have the distribution.
Hilift 19 hours ago [-]
[flagged]
wobfan 18 hours ago [-]
TIL that committing a bug disqualifies me from getting a PhD. Lucky that I stopped with my master's degree; that would've been a bad surprise.
LtWorf 17 hours ago [-]
I hope there's no bugs in any of your projects or we might have to cancel your master's as well. Maybe even your high school diploma.
justinrubek 15 hours ago [-]
Well, at least you did a pretty good job of dismissing the point in the comment above you. Maybe that counts for something.
pixelesque 1 days ago [-]
I've been working on a project of mine which makes web requests via libCURL, and the number of memory allocations OpenSSL makes for a single TLS connection is astonishing - running it through `heaptrack` was a real eye-opener.
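For anyone who wants to reproduce this kind of measurement, a minimal program along these lines (the URL is just a placeholder) is enough of a target to run under heaptrack and watch the allocations made during a single TLS connection:

```c
/* Minimal libcurl program that performs a single HTTPS request. Build with
 * `-lcurl` and run it under heaptrack to see the allocations made by the
 * TLS stack during one connection. The URL is just a placeholder. */
#include <curl/curl.h>
#include <stdio.h>

int main(void)
{
    curl_global_init(CURL_GLOBAL_DEFAULT);

    CURL *h = curl_easy_init();
    if (!h) {
        curl_global_cleanup();
        return 1;
    }

    curl_easy_setopt(h, CURLOPT_URL, "https://example.com/");
    CURLcode rc = curl_easy_perform(h);
    if (rc != CURLE_OK)
        fprintf(stderr, "request failed: %s\n", curl_easy_strerror(rc));

    curl_easy_cleanup(h);
    curl_global_cleanup();
    return rc == CURLE_OK ? 0 : 1;
}
```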
Discussion I found about other people mentioning it:
https://github.com/openssl/openssl/discussions/26659
In some (not all) of my workflows, perf record traces show that the allocation/deallocation overhead is quite significant, especially in a multi-threaded setup where contention on the system allocator starts to become a problem.
wtarreau 1 days ago [-]
Absolutely. Sometimes when using OpenSSL in performance tests, you notice that performance varies significantly just by switching to a different memory allocator, which is totally scary.
I hadn't seen the conversation above, thanks for the pointer. It's surreal. I don't see how having to support multiple file formats requires so many allocations. In the worst case you open the file (one malloc and occasionally a few reallocs) and try to parse it into a struct using a few different decoders. I hope they're not allocating one byte at a time when reading a file...
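A minimal sketch of that "one malloc, occasionally a few reallocs" approach, with the growth strategy and error handling kept deliberately simple:

```c
/* Sketch of the single-buffer approach: read an entire file into one
 * growable buffer, then hand that buffer to whichever decoder recognises
 * the format. Error handling is minimal. */
#include <stdio.h>
#include <stdlib.h>

char *read_whole_file(const char *path, size_t *out_len)
{
    FILE *f = fopen(path, "rb");
    if (!f)
        return NULL;

    size_t cap = 4096, len = 0;
    char *buf = malloc(cap);

    while (buf) {
        len += fread(buf + len, 1, cap - len, f);
        if (len < cap)                      /* short read: EOF (or error) */
            break;
        char *tmp = realloc(buf, cap * 2);  /* the occasional realloc */
        if (!tmp) {
            free(buf);
            buf = NULL;
            break;
        }
        buf = tmp;
        cap *= 2;
    }

    fclose(f);
    if (buf)
        *out_len = len;
    return buf;
}
```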
dur-randir 22 hours ago [-]
For anyone wondering, have fun reading responses to those:
- https://github.com/openssl/openssl/issues/20286
- https://github.com/openssl/openssl/issues/18814
- https://github.com/openssl/openssl/issues/16791
- https://github.com/openssl/openssl/issues/17950
I had no idea this happened, but after reading this article my main takeaway was that OpenSSL shot themselves in the foot. Is this a fair assessment? I mean, the excessive locking just stands out as ill-advised for this performance-sensitive piece of code.
owenthejumper 18 hours ago [-]
Yes
2bluesc 1 days ago [-]
Never heard of aws-lc before this, but now I'm looking for an excuse to use it.
juangacovas 21 hours ago [-]
Indeed, I ran some preliminary tests under RHEL 9 (Rocky, etc.) for example, and if you're used to compiling HAProxy from source against specific OpenSSL versions, testing "aws-lc" is fairly straightforward. The aws-lc BUILD instructions and HAProxy's INSTALL file also help.
drob518 11 hours ago [-]
Well, that was an extremely thorough and well-documented takedown of OpenSSL's issues, both technical and organizational. The OpenSSL team should be embarrassed. What's your side of the story, OpenSSL?
toast0 23 hours ago [-]
> Roman Arutyunyan from NGINX core team were the first to propose a solution with a clever method that abuses the keylog callback to make it possible to extract or inject the required elements, and finally make it possible to have a minimal server-mode QUIC support.
This sounds like future pain waiting to happen. My experience with callbacks in OpenSSL 1.0.x is that letter releases may significantly alter or remove callbacks. In the long term, I guess it worked out, because I figured out a better way to do what I was trying to do and got it accepted into OpenSSL, and that fixed things for everyone. But in the short term it was a PITA: I'd have to decide between an out-of-date OpenSSL with my feature working, or updating OpenSSL to address whatever security problem of the day while my feature broke for the week it took me to figure out how to do it in the new world.
In my case, the feature was doing DHE-RSA with Microsoft Schannel and not causing an 'out of memory' error on the client when the server public key has 8 or more bits of zeros at the high end. The client didn't actually run out of memory, but that was the error it reported.
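For context, the keylog hook mentioned in the quoted passage is a standard OpenSSL callback (available since 1.1.1) for exporting TLS secrets in NSS key log format; registering it looks roughly like this. This is only a minimal sketch of the hook itself, not NGINX's or HAProxy's actual QUIC integration:

```c
/* The standard OpenSSL keylog callback (OpenSSL 1.1.1+): the library calls
 * it with one line per TLS secret, in the NSS key log format. The approach
 * quoted above builds on this hook to extract secrets that the regular API
 * of non-QUIC-aware OpenSSL versions does not expose. Here the lines are
 * only printed. */
#include <openssl/ssl.h>
#include <stdio.h>

static void keylog_cb(const SSL *ssl, const char *line)
{
    (void)ssl;
    printf("keylog: %s\n", line);
}

void setup_keylog(SSL_CTX *ctx)
{
    SSL_CTX_set_keylog_callback(ctx, keylog_cb);
}
```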
Multithreading can be hard; when I had a problem for HAProxy to solve, 1.8 was just out and using multiple processes was much better for my application than threads. Configuration for multiple processes was more difficult, of course, but a one-time pain in exchange for many multiples of throughput was worth it (and I didn't need any shared state between processes, so I didn't lose functionality... not everyone has that liberty). But I gather multithreaded HAProxy has gotten a lot better; in any case my problem went away and I left the company whose problem I was solving with HAProxy, so I don't have current knowledge. But the description of OpenSSL is describing a multithreading nightmare:
> With OpenSSL 3.0, an important goal was apparently to make the library much more dynamic, with a lot of previously constant elements (e.g., algorithm identifiers, etc.) becoming dynamic and having to be looked up in a list instead of being fixed at compile-time. Since the new design allows anyone to update that list at runtime, locks were placed everywhere when accessing the list to ensure consistency. These lists are apparently scanned to find very basic configuration elements, so this operation is performed a lot.
I'm not sure this kind of thing needs to be a fixed list at compile time (although it has worked for decades, yeah?); having a way to load a config and freeze it before sharing it would be a good way to avoid the need for locking. If it does need to be changeable at runtime, it can still be done with performance... an instance of configuration should be immutable, and then you can have a shared pointer to the current config. At critical points, you might copy the current pointer or something. There are techniques and prior art for this kind of stuff.
https://bholley.net/blog/2015/must-be-this-tall-to-write-mul...
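Related, on the caller's side of the fence: with OpenSSL 3.x's provider API, applications can avoid hitting those locked lists on the hot path by fetching algorithm handles once and reusing them. A minimal sketch of that fetch-once pattern (error handling trimmed, names are mine, and this is an application-side mitigation rather than anything HAProxy specifically does):

```c
/* Fetch-once pattern with the OpenSSL 3.x provider API: fetch the algorithm
 * handle once at startup and reuse it, so per-operation code stops walking
 * the locked algorithm lists. Error handling is trimmed. */
#include <openssl/evp.h>
#include <stddef.h>

static EVP_MD *sha256_md;   /* fetched once, reused by every digest */

int crypto_init(void)
{
    sha256_md = EVP_MD_fetch(NULL, "SHA256", NULL);
    return sha256_md != NULL;
}

/* `out` must have room for EVP_MAX_MD_SIZE bytes. */
int hash_buffer(const unsigned char *in, size_t len,
                unsigned char *out, unsigned int *out_len)
{
    EVP_MD_CTX *ctx = EVP_MD_CTX_new();
    int ok = ctx != NULL
          && EVP_DigestInit_ex(ctx, sha256_md, NULL)
          && EVP_DigestUpdate(ctx, in, len)
          && EVP_DigestFinal_ex(ctx, out, out_len);
    EVP_MD_CTX_free(ctx);
    return ok;
}

void crypto_cleanup(void)
{
    EVP_MD_free(sha256_md);
}
```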
> Modern implementations must support a range of TLS protocol versions (from legacy TLS 1.0 to current TLS 1.3)
This statement is strange considering that "modern" security standards either nudge you toward (or outright demand) deprecating anything that isn't TLS 1.2 or 1.3.
If the implementation is "modern" why would I allow 1.0 ?
This seems like a HA-Proxy problem. They ought to maintain support for geriatric TLS versions on a dedicated release branch connected to a support-model that nudges their client into updating by increasing their fees for maintaining that system. Not doing so means the vendor is part of the problem why we have slower adoption rates for 1.3 than we could otherwise have.
> "In 2015, AWS introduced s2n-tls, a fast open source implementation of the TLS protocol. The name "s2n", or "signal to noise," refers to the way encryption masks meaningful signals behind a facade of seemingly random noise. Since then, AWS has launched several other open source cryptographic libraries, including Amazon Corretto Crypto Provider (ACCP) and AWS Libcrypto (AWS-LC). AWS believes that open source benefits everyone, and we are committed to expanding our cryptographic and transport libraries to meet the evolving security needs of our customers."
Here is a pdf that provides some performance results for s2n (sadly not s2n-quic):
> If the implementation is "modern" why would I allow 1.0 ?
Because there's a distinction between "using" (especially by default) and "implementing".
The real world has millions (billions?) of devices that don't receive updates and yet still need to be talked to, in most cases luckily by a small set of communication partners. Would you rather have even the "modern" side of that conversation be forced to use some ancient SSL library? I'd rather have modern software, even if I'm forced to use an older protocol version by the other endpoint. Just disable it by default.
And it's not like TLS 1.0 and 1.1 are somehow worse than cleartext communication. They're still encrypted transport protocols that take significant effort to break. The fact that you should avoid them whenever possible doesn't mean you can't fall back to them when nothing else is possible.
jeroenhd 7 hours ago [-]
Exposing TLS 1.0 leaves your connections vulnerable to BEAST. Requiring TLS 1.2 deprecates clients older than what, Android 4.4.2 and Safari 9? Maybe for exceptional cases like IoT crapware and fifteen-year-old smartphones you might still need 1.1? I don't see why you'd want to take on the additional work and risk otherwise. In practice TLS 1.2 has been available for long enough that it should be the bare minimum at this point.
If I were to implement a TLS server today, I'd start at 1.2, and not bother with anything older. All of the edge cases, ciphers, protocols, config files, and regression tests are wasted time and effort.
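For what it's worth, enforcing that floor is usually a one-liner. With OpenSSL, for example, it looks roughly like this (a minimal sketch, not a complete server setup):

```c
/* Sketch of the "start at TLS 1.2" stance with OpenSSL (1.1.0+): set a
 * protocol floor on the server context so older versions are never
 * negotiated, whatever the client offers. Not a complete server setup. */
#include <openssl/ssl.h>

SSL_CTX *make_server_ctx(void)
{
    SSL_CTX *ctx = SSL_CTX_new(TLS_server_method());
    if (!ctx)
        return NULL;

    /* Refuse anything below TLS 1.2; use TLS1_3_VERSION to go further. */
    if (SSL_CTX_set_min_proto_version(ctx, TLS1_2_VERSION) != 1) {
        SSL_CTX_free(ctx);
        return NULL;
    }
    return ctx;
}
```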
tialaramex 14 hours ago [-]
> And it's not like TLS 1.0 and 1.1 are somehow worse than cleartext communication.
In reality humans can't actually do this nuance you're imagining, and so what happens is you're asked "is this secure?" and you say "Yes" meaning "Well it's not cleartext" and then it gets ripped wide open.
HTTP in particular is like a dream protocol for cryptanalysis. If in 1990 you told researchers to imagine a protocol where clients will execute arbitrary numbers of requests to any server under somebody else's control (Javascript) and where values you control are concatenated with secrets you want to steal (Cookies) they'd say that's nice for writing examples, but nobody would deploy such an amateur design in the real world. They would be dead wrong.
But eh, we wrote an RFC telling you not to use these long obsolete protocol versions, and you're going to do it anyway, so, whatever.
eqvinox 10 hours ago [-]
> In reality humans can't actually do this nuance […]
Luckily the cases that need this aren't normally about a wide user base, rather they only concern a bunch of developers and admins. Which is why I pointed out the default-off nature of this.
> But eh, we wrote an RFC telling you not to use these long obsolete protocol versions, and you're going to do it anyway, so, whatever.
You're losing your audience with unnecessary hostility. Your post would've been much more effective if you'd simply omitted that last paragraph.
toast0 11 hours ago [-]
> This seems like a HA-Proxy problem. They ought to maintain support for geriatric TLS versions on a dedicated release branch connected to a support-model that nudges their client into updating by increasing their fees for maintaining that system. Not doing so means the vendor is part of the problem why we have slower adoption rates for 1.3 than we could otherwise have.
If I understand what you're suggesting, it's that HAProxy should have their current public releases support only TLS 1.2 and 1.3, plus a paid release that supports TLS 1.0-1.3, and that this would encourage adoption of 1.3?
I would expect those users who have a requirement for TLS 1.0 to stay on an old public free release that supports TLS 1.0-1.2 in that case. If upgrading to support 1.3 would mean dropping a requirement or paying money, who would do it? How does that increase adoption vs making it available with all the other versions in the free release? Some people might reevaluate their requirements given the choices, but if anything that pushes abandonment of TLS 1.0 more than adoption of TLS 1.3.
I no longer have to support this kind of thing, but when you require dropping the old thing at the same time as supporting the new thing, you're forcing them to choose, and unless the choice is very clear, you'll have a large group of people that pick to support the old thing. IMHO, the differences between TLS 1.0, 1.1, and 1.2 aren't so big that you can claim it's too hard to support them all, and dropping support for 1.0 and 1.1 on the server doesn't gain much security. 1.2 to 1.3 is a bigger change; if you wanted to only support 1.3, that's an argument to have, but I don't think that's a realistic position for a general purpose proxy at this point in time (it would certainly be a realistic configuration option though).
owenthejumper 18 hours ago [-]
AFAIK haproxy does not charge their users increased fees for legacy TLS.
You would be shocked how much legacy software there is, requiring TLS 1.0. Not saying that is a good thing, just a reality…
wtarreau 11 hours ago [-]
Just to be clear, we don't care at all about performance of 1.0. The tests resulting in the pretty telling graphs were done in 1.3 only, as that's what users care about.
OpenSSL 3.0 was catastrophic, but it looks like OpenSSL 3.5 isn't too bad.
jeffbee 15 hours ago [-]
It seems weird to disqualify BoringSSL due to its live-at-head project philosophy. Obviously _your_ project doesn't have to change every time BoringSSL changes.
toast0 13 hours ago [-]
The only way that really works is if HAProxy includes the version of BoringSSL they want to use in their source tree. Otherwise, it's too hard for users to find the right version.
But you quickly get to the hard problems of when there are security fixes for BoringSSL.
Upstream is only going to fix it on head per their philosophy. So either someone has to backport the fix to whatever version HAProxy uses on their supported versions, or HAProxy has to update to head with whatever gymnastics required on all their supported versions.
In the meantime, it's probably more difficult for users to understand if their system has a vulnerable version of boringssl, because it's embedded into HAProxy, and not a separate package.
Given that, it makes a lot of sense to not consider it, when AWS-LC is a fork of BoringSSL that's intended to be used by 3rd party projects.
jeroenhd 7 hours ago [-]
Their project has LTS versions that need to support the stuff that was supported on the date the release was made. BoringSSL doesn't do versioned/LTS releases, so their code happily moves on, deprecating/altering/removing features as they see fit.
You could do fake LTS releases of BoringSSL, manually backporting features and fixes, but that's a lot of work with little pay-off. Once a feature gets removed at HEAD and bugs/vulnerabilities/whatever get found in that feature, you're stuck figuring it out yourself, and altering all future code that the feature depended on to keep it compatible.
Combining live-at-HEAD and LTS versions is a pain. There's nothing wrong with either approach, but combining the two is asking for trouble, especially for core security libraries, where you can't just pin a commit and hope for the best.
edelbitter 1 days ago [-]
[flagged]
tptacek 1 days ago [-]
Willy Tarreau is the original author of HAProxy.
mplanchard 1 days ago [-]
[flagged]
dugite-code 1 days ago [-]
IMHO it reads like a corporate communication that's been reworked 3 or 4 times by several people.
AI-generated content always sounds like this because it's some of the most readily available content to train them on.