Is there any solution beside TLS for data-in-transit protection?
Rejecting TLS out of fear that an attacker could have stolen a private key is like rejecting medicine out of fear that someone could have poisoned it. The downsides of going without far outweigh the risks of using it.
Why TLS is secure
There are many angles to answering this, but one that speaks for itself is that everyone is using it. Government agencies use it, huge tech companies use it, banks use it, hospitals use it, and Stack Exchange uses it too. If TLS were insecure, don't you think at least someone would decide it's a bad idea and switch to something else? The fact that TLS is nearly ubiquitous and recommended everywhere, by everyone, should tell you that it's a good idea to use it.
Furthermore, TLS is secure if configured correctly. Version 1.3 makes this a no-brainer: as of the time of this writing, there is no wrong way to configure TLS 1.3. TLS 1.2 is a bit more difficult, since it still includes some insecure ciphers. SOGIS - which also uses TLS - has an extensive guide on which ciphers are recommended and which are still tolerable for legacy usage.
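To illustrate how little there is to get wrong, here is a minimal Python sketch of a client that refuses anything below TLS 1.3 (the host name is just the site mentioned later in this answer):

```python
import socket
import ssl

# Minimal sketch: a client context that refuses anything below TLS 1.3.
# With that single constraint, there are no insecure cipher choices left to make.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_3

with socket.create_connection(("security.stackexchange.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="security.stackexchange.com") as tls:
        print(tls.version())  # e.g. 'TLSv1.3'
        print(tls.cipher())   # the negotiated AEAD cipher suite
```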
Finally, compromise of the private key leading to insecure communication isn't a flaw in TLS that some other protocol could alleviate - it's a flaw inherent to network communication. If you assume that an attacker has full control over your server, then no matter what protocol you're using, the attacker would be able to decrypt any traffic, manipulate any traffic and forge new traffic.
In other words, TLS is not designed to protect against server compromise, and neither TLS nor an alternative to TLS - self-made or otherwise - would protect against that.
Alternatives to TLS
Since the question asks for alternatives to TLS, there is one: DTLS
DTLS is basically TLS, but for UDP. The reason you might want to use DTLS is because your application uses UDP instead of TCP (e.g. a VoIP program), but you still want your datagrams to be encrypted.
DTLS is not more or less secure than TLS, but instead is designed to work with a different underlying layer.
For web clients, you are out of luck: the only two protocols supported by browsers are HTTP (without any security) and HTTPS (with TLS).
For mobile applications, if you really must use a protocol other than TLS 1.3, here is my recommendation:
First, you can use a library like libsodium to encrypt data and handle all cryptographic operations. Embed the server's public key in the application to authenticate the server. To authenticate the clients, you can derive a key from the user's password.
Second, and this is the most important part, transmit this encrypted data inside a TLS 1.3 tunnel, over HTTPS. This way, you can tell your management that you are resilient in case TLS is broken, but you still benefit from all the security provided by TLS that you cannot achieve with your custom implementation.
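A rough sketch of that layering with PyNaCl (Python bindings for libsodium) - the key handling here is illustrative, not a prescription:

```python
import nacl.public

# Sketch of the extra layer using PyNaCl (libsodium bindings). In a real app the
# server's public key would be embedded in the client; a key pair is generated
# here only so the example is self-contained and runnable.
server_key = nacl.public.PrivateKey.generate()   # stays on the server
server_public_key = server_key.public_key        # shipped inside the application

# Client side: seal the payload so only the server's private key can open it,
# then send the ciphertext as the body of an ordinary HTTPS (TLS 1.3) request.
ciphertext = nacl.public.SealedBox(server_public_key).encrypt(b"sensitive payload")

# Server side: decrypt after the request has arrived through the TLS tunnel.
plaintext = nacl.public.SealedBox(server_key).decrypt(ciphertext)
assert plaintext == b"sensitive payload"
```

Client authentication with a password-derived key would sit on top of this; the essential point is that the ciphertext still travels over HTTPS.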
You can try to do something similar for your web application using a JavaScript cryptography library. However, please keep in mind that this is only to ease the management requirements. In practice, it adds zero security against an active eavesdropper. JavaScript cryptography is only useful when the user trusts the server and the connection, and the latter requires TLS anyway.
Also, instead of calling your protocol TLS, you can refer to it by the full name of the cipher suite it uses, e.g. "ECDHE_RSA_WITH_AES_GCM_SHA256" (the suite currently used by security.stackexchange.com).
This seems like an X-Y problem. The question is "Management are concerned about the potential for TLS-related attacks. How do I layer more encryption on TLS?". Other answers have covered how you could layer on additional encryption, and why you shouldn't, but let's look at how you should deal with the real problem: management's concerns. Let's turn those concerns into threats we can model.
One threat you mentioned is that the server private keys are compromised. This does not automatically lead to your users being compromised, since the attacker also needs to intercept traffic between your users and your servers. This is not impossible (there are a few attacks that could achieve it, such as DNS- or BGP-related attacks, users connecting through a fake WiFi hotspot, or ARP spoofing if the attacker can get onto the same local network as a user), but it does require a second vulnerability.
There are a few mitigations you can put in place here. One is to rotate your server keys frequently, so an attacker doesn't have access for very long. Keeping your server keys in a hardware security module also makes this much harder to exploit. Another is to only support cipher suites with forward secrecy, since this forces an attacker to actively intercept, rather than passively eavesdrop on, your communication. If your application supports it, you could use client-side certificates, which would mean an attacker needs to compromise both the client and server credentials to successfully MITM the connection. Another important mitigation is to put intrusion detection in place on your servers. You can also monitor for BGP- and DNS-related anomalies, which helps you catch an attack before it escalates to the point of exploitability.
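A sketch combining two of these mitigations (forward-secret suites and client certificates) for a Python server - the file names are hypothetical:

```python
import ssl

# Sketch: a server context that only offers forward-secret (ECDHE) suites on
# TLS 1.2, so a stolen server key cannot decrypt previously recorded traffic,
# and that requires client certificates issued by an internal CA.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
ctx.set_ciphers("ECDHE+AESGCM:ECDHE+CHACHA20")        # TLS 1.2 suites; TLS 1.3 is forward-secret by design
ctx.verify_mode = ssl.CERT_REQUIRED                   # demand a client certificate
ctx.load_verify_locations("internal-client-ca.pem")   # hypothetical internal CA bundle
ctx.load_cert_chain("server.crt", "server.key")       # hypothetical server credentials
```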
You also mentioned MITM as a possible threat. This is the flipside of the previously mentioned risk, and again needs an additional break in order to become exploitable. Leaked server private keys are one such break (as discussed above), but there are others, such as certificate mis-issuance or SSL stripping.
You can mitigate the MITM risk by proactively monitoring for certificate mis-issuance. Check https://crt.sh for new certificates issued for your domain. If you control both sides of the communication, you can eliminate the possibility of certificate mis-issuance by not using or trusting certificates from a public CA at all, and instead issuing certificates from an internal CA. You should also look at HSTS and potentially HPKP (although the latter has risks of its own). If your service is only offered in a well-defined geographical region, GeoIP blocking may also help. Client-side certificates help here too, and secure cookies can help with the SSL stripping risk.
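A sketch of that crt.sh check - the domain and the list of expected issuers are placeholders, and the JSON field names are those crt.sh exposes at the time of writing:

```python
import requests

# Sketch: pull Certificate Transparency entries for your domain from crt.sh and
# flag any certificate from an issuer you don't expect.
DOMAIN = "example.com"                  # placeholder
EXPECTED_ISSUERS = ("Let's Encrypt",)   # placeholder

resp = requests.get("https://crt.sh/", params={"q": DOMAIN, "output": "json"}, timeout=30)
for entry in resp.json():
    issuer = entry.get("issuer_name", "")
    if not any(expected in issuer for expected in EXPECTED_ISSUERS):
        print(f"Unexpected certificate for {entry.get('name_value')}, issued by: {issuer}")
```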
So the remaining question becomes: what mitigations can be put in place against the threat of an attacker who has both successfully obtained the private keys for your domain and man-in-the-middled your users? At this point, it's probably worth mentioning that whilst defence-in-depth has value, it also has a cost: complexity. There is a possibility that the added complexity of the new measures adds more attack surface, and hence more risk, than it mitigates. And if you've got most of the mitigations detailed above in place, this becomes a very unlikely threat.
Still, there are some potentially viable mitigations here. One is to design the system with data-level encryption to which the server does not hold the keys (only the user does). This way, your system may still be somewhat secure even if your server is totally compromised. Another is simply to limit how much trust is put in the web interface - for example, ensure that admin actions can't be performed from the public web interface, only from an interface restricted to your internal network. Confirming high-value actions out-of-band may also help (e.g. phoning users to confirm such requests).
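A sketch of the first idea, assuming PyNaCl and a key derived from the user's password, so the server only ever stores ciphertext (plus the salt needed to re-derive the key):

```python
import nacl.pwhash
import nacl.secret
import nacl.utils

SALT_LEN = nacl.pwhash.argon2id.SALTBYTES

def _derive_key(password: bytes, salt: bytes) -> bytes:
    # Derive a symmetric key from the user's password on the client side.
    return nacl.pwhash.argon2id.kdf(nacl.secret.SecretBox.KEY_SIZE, password, salt)

def encrypt_for_storage(password: bytes, plaintext: bytes) -> bytes:
    # The server only ever receives this blob: salt || nonce+ciphertext.
    salt = nacl.utils.random(SALT_LEN)
    return salt + nacl.secret.SecretBox(_derive_key(password, salt)).encrypt(plaintext)

def decrypt_from_storage(password: bytes, blob: bytes) -> bytes:
    salt, ciphertext = blob[:SALT_LEN], blob[SALT_LEN:]
    return nacl.secret.SecretBox(_derive_key(password, salt)).decrypt(ciphertext)
```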
But as others have pointed out, these risks probably aren't the most pressing ones your application faces. The threats of attackers identifying an off-the-shelf component with a known vulnerability, finding an SQL injection vulnerability, discovering that one of your APIs doesn't do the authorization checks it's supposed to, finding an SSRF vulnerability that leaks your AWS keys, finding an endpoint with CSRF or XSS weaknesses, discovering user data or account credentials in an unsecured S3 bucket, phoning your support desk claiming to be a system administrator who has forgotten their password, or just launching a DDoS attack are all far more likely than the threats in the question.