Better Error Handling

22 comments

Checking for errors after every line (like in Go) is the worst. I used to do that in C/C++ calling Win32 APIs. Know what happened when sloppy developers came along? They didn’t bother checking, and you got really bizarre, impossible-to-debug problems because things failed in mysterious ways. At least with an exception, if you “forget” to catch it, it blows up in your face and it’ll be obvious.

Sure, monads are cool and I’d be tempted to use them. They make it impossible to forget to check for errors, and if you don’t care you can panic.

But JS is not Rust. And the default is obviously to use exceptions.

You’ll have to rewrap every API under the sun. So for monads in JS to make sense, you need a lot of code that’s awkward to write with exceptions in order to justify the cost.

I’m not sure the example of doing a retry in the API is “enough” to justify the cost. Also in the example, I’m not sure you should retry. Retries can be dangerous especially if you pile them on top of other retries: https://devblogs.microsoft.com/oldnewthing/20051107-20/?p=33...

Monadic style or not, the `if err != nil return err` pattern destroys critical information for debugging. `try/catch` gives you a complete stacktrace. That stacktrace is often more valuable than the error message itself.

> the `if err != nil return err` pattern destroys critical information for debugging.

Funny enough, your code looks like it is inspired by Go, and Go experimented with adding stack traces automatically. Upon inspecting real-world usage, they discovered that nobody ever used the stack traces, and came to learn that good errors already contain everything you’d want to know, making stack traces redundant.

Whether or not you think they made the right choice, it is refreshing that the Go project applies the scientific method before making a decision. The cool part is that replication is the most important piece of the scientific method, so anyone who does think they got it wrong can demonstrate it!

Please can you provide the article/mailing list where this was discussed along with their methodology?

[deleted]

I think you omitted the part about the ergonomics of wrapping one err in another err as it bubbles out of nested if statements, like a poor person’s stack trace.

And then using those utilities to test whether an err is of a certain kind once it’s been wrapped a few times.

I code Go professionally. I like the language. However, I vehemently disagree with the stance that error messages > stack traces. Error messages are in no way as helpful as stack traces. Ideally, you'd have both.

The scientific method doesn't care about your feelings, but it does care about your replication efforts. Under what circumstances did your research find stack traces to be necessary?

The idea that you need to run a study to criticize a design decision is stupid. A stack trace is ground truth, natural language is interpretative. The implications are plain to see.

> The idea that you need to run a study to criticize a design decision is stupid.

Not at all. Fundamentally, you do need understanding in order to criticize. "Criticizing" without understanding is merely whining. If your intent is to whine, you are certainly welcome to, to your heart's content, but it will be fruitless. Without you having an understanding – and being able to articulate it – progress cannot be made. This should be obvious.

> A stack trace is ground truth

But a costly truth. Even languages that do pass around stack traces are careful to avoid them except under special circumstances, which is kind of nonsensical from a theoretical point of view. If you find them to be useful, you'd find them useful in all cases. However, it is a necessary tradeoff for the sake of computing efficiency.

With a few small changes to your codebase you can restore the automatic attachment of stack traces like the original experiments had. Stack traces are made available for you to use! But, it remains that the research showed that the typical application didn't ever use it, so it wasn't a sensible default to include given the cost of inclusion. "But, but I wish it were!" doesn't change reality like you seem to think it does.

You come off extremely rude, and for someone so enamored with the scientific method you make a lot of baseless assumptions about what I think and understand.

Understanding comes from all kinds of places. When a child touches a hot stove, they come to understand the consequences. That child doesn't gather 30 participants and record their reactions as they take turns burning their fingers. I'll leave you to extrapolate.

He's a troll. I regret feeding him.

> You come off extremely rude

How I come across has no bearing on what is said. This is irrelevant and a pointless distraction.

> Understanding comes from all kinds of places.

If you have an understanding then you've studied it. All that is lacking, then, is the communication of what is understood. Strangely, after several comments, still crickets on the only thing that would add value to the discussion...

This is the arrogance of language designers with limited experience developing real-world applications. Maybe it works as a replacement for C building low level apps, but it won't fly in enterprise codebases.

Go recognizes the arrogance of language designers (Pike is a team member, so it was hard for such a thing to go unnoticed!), hence why they put their theories to the real-world test instead of just guessing. Most languages seem to pack in feature after feature because some random nobody on the internet thought it would be a good idea without ever seeing if their thought could be proven as a good one, but Go tries to actually show it first.

Which is also why it draws so much ire. It speaks the truths developers don't like to admit.

With which million line enterprise codebase did they real-world test this?

They test with their community and get the biases of their community. Don't pretend it's more than that.

"Enterprise" being a euphemism for low-quality developers who don't know how to write quality software (the sloppy developers referenced in an earlier comment)? Perhaps not. Go does seem to scare off anyone who relies on crutches to work around shoddy work and inability.

But it doesn't really matter which codebases they used, does it? Replication efforts will reveal anything they got wrong. No need to make guesses.

"Enterprise" is a euphemism for line of business software that makes billions of dollars. Ie, the stuff that runs the real world.

I have written, and continue to write at another org, Go that drives line of business software that makes billions of dollars and runs the real world. Check a random email you have and if you see a header X-EID, it was processed with Go.

Please tell me about the scientific study relating to the utility of stack traces that was run in your codebase.

Individual codebases can already include stack traces as they see fit, so studying only a single codebase would be rather pointless. If an individual codebase benefits from stack traces, it will use them! The study looked at how often they were used to determine if it would be a useful default. It found, by looking at many different projects that had stack traces included by default, that it would not be useful to include by default. Especially in light of the cost of inclusion. Adding stack traces is definitely not free.

"Enterprise" traditionally refers to code that is full of bad practices. Like, when Java was all the rage, FooFactoryBuilderFactory was the embolism of enterprise. In other words, where sloppy developers are found. Glad we were able to clear up that isn't what you meant.

So you are meaning – with respect to the code – the same as all other software? What, then, is "enterprise" trying to add? It is not like you would write code that makes billions any differently than you would a code that makes a dollar.

That is why everyone says to wrap your errors in Go. %w ftw

Naked err returns can be a source of pain.

An advantage to the Monad approach is that it sugars to the try/catch approach and vice-versa (try/catch desugars to monads). JS Promises are also already "Either<reject, resolve>". In an async function you are writing try/catch, but it desugars to monadic code. You could write an alternative to a library like "neverthrow" that just wraps everything in a Promise and get free desugaring from the async and await keywords (including boundary conditions like auto-wrapping synchronous methods that throw into Promise rejections). You could similarly write everything by hand monadically/pseudo-monadically directly with `return Promise.reject(new Error())` and `return Promise.resolve(returnValue)` and everything just works with a lot of existing code and devs are quite familiar with Promise returns.
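
A minimal sketch of that desugaring (the config-parsing functions are invented for illustration; nothing beyond standard Promise/async semantics is assumed):

    // "async" auto-wraps: a throw becomes a rejection, a return becomes a resolution.
    async function parseConfig(text: string): Promise<{ port: number }> {
      const parsed = JSON.parse(text);                     // a throw here rejects the Promise
      if (typeof parsed.port !== "number") {
        return Promise.reject(new Error("missing port"));  // explicit "err" value also works
      }
      return { port: parsed.port };                        // the "ok" value
    }

    async function loadConfig(text: string): Promise<number> {
      try {
        return (await parseConfig(text)).port;             // await "unwraps" the Either
      } catch (err) {
        return 3000;                                       // both failure paths land here
      }
    }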

It might be nice for JS to have a more generic-sounding "do-notation"/"computation expression" syntax than async/await, but it is pretty powerful as-is, and it's kind of interesting seeing people talk about writing monadic JS error handling while ignoring the "built-in" one that now exists.

This is also where I see it as a false dichotomy between Monads and try/catch. One is already a projection of the other in existing languages today (JS Promise, C# Task, Python Future/Task sometimes), and that's probably only going to get deeper and in more languages. (It's also why I think Go being intentionally "anti-Monadic" feels like such a throwback to bad older languages.)

Moving from try:catch to errors as values was so refreshing. Same company, same developers, but suddenly people were actually _thinking_ of their errors. Proper debugging details and structured logging became default.

I assert that try:catch encourages lazy error handling leading to a worse debugging experience and longer mean time to problem discovery.

Checked exceptions also force people to think of their errors

Nice thing about Monads in JS with tools like neverthrow is that you can create the Monad boundary where you like.

It becomes very similar to try-catch exception handling at the place you draw the boundary, then within the boundary it’s monad land.

If you haven’t wrapped it in a monad, chances are you wouldn’t have wrapped it in a try-catch either!
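
A small sketch of what drawing that boundary can look like with neverthrow (the parse/validate functions here are invented for illustration):

    import { ok, err, Result } from "neverthrow";

    // Inside the boundary: everything speaks Result.
    const parse = (raw: string): Result<number, Error> => {
      const n = Number(raw);
      return Number.isNaN(n) ? err(new Error("not a number")) : ok(n);
    };

    const validate = (n: number): Result<number, Error> =>
      n >= 0 ? ok(n) : err(new Error("must be non-negative"));

    // At the boundary: match() plays the role the catch block would have played.
    function handleInput(raw: string): string {
      return parse(raw)
        .andThen(validate)
        .match(
          (n) => `ok: ${n}`,
          (e) => `error: ${e.message}`
        );
    }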

Don’t accept sloppy development practices regardless of what programming language you’re going to use.

JS aside, I recently tried my very best to introduce proper logging and error handling to an otherwise "look ma, no handlebars" codebase.

Call it a thought experiment. We start with a clean implementation that satisfies the requirements. It makes the bold assumption that every star in the universe will align to help us achieve the goal.

Now we add logging and error handling.

Despite my best intentions and years of experience, starting with clean code, the outcome was a complete mess.

It brings back memories of 2006, when I was implementing deep linking for Wikia. I started with a "true to the documentation" implementation which was roughly 10 lines of code. After handling all edge cases and browser incompatibilities I ended up with a whopping 400 lines.

Doing exactly the same as the original lines did, but cross compatible.

I guess I’ll ask, did you try using exceptions?

> We start with a clean implementation that satisfies requirements ... Now we add logging and error handling.

If error handling and logging isn't necessary to satisfy requirements, why bother with them at all?

Handlebars like on a bike, or like the templating language?

They mean the bike analogy

> An interesting debate emerged about the necessity of checking every possible error:

> In the JS world this could be true, but for Rust (and statically typed compiled languages in general) this is actually not the case… Go pointers are the only exception to this. There is no nil-check protection at the compile level. But Rust, Kotlin, etc. are solid.

Yes, it actually is the case. You cannot check/validate for every error, not even in Rust. I recommend getting over it.

For a stupid-simple example: You can't even check if disk is going to be full!

The disk being full is a real error you have to deal with, and it could happen at any line in your code through no fault of your own. And no, it doesn't always happen at write(): it can also happen when you allocate pages for writing (e.g. a SIGSEGV). You cannot really fix this in code (aborting or unwinding will only ever annoy users), but you can do something.

We live in a multitasking world, so our users can deal with out-of-disk and out-of-memory errors by deleting files, adding more storage, closing other (lower priority) processes, paging/swapping, and so on. So you can wait: maybe alert the user/operator that there is trouble but then wait for the trouble to clear.
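
A rough sketch of that "wait it out" idea in Node.js (the interval is arbitrary, and it assumes the runtime reports a full disk with an ENOSPC error code):

    import { promises as fs } from "node:fs";
    import { setTimeout as sleep } from "node:timers/promises";

    async function writeWhenSpaceAllows(path: string, data: string): Promise<void> {
      for (;;) {
        try {
          await fs.writeFile(path, data);
          return;
        } catch (e: any) {
          if (e?.code !== "ENOSPC") throw e;  // only wait on "disk full"
          console.warn("out of disk space; waiting for an operator to free some up");
          await sleep(30_000);                // then try again
        }
      }
    }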

Also: dynamic-wind is a useful general-purpose programming technique that is awkward to emulate, and I personally dislike subclassing BackTrack from Error because of what can only be a lack of imagination.

> We live in a multitasking world, so our users can deal with out-of-disk and out-of-memory errors by deleting files, adding more storage, closing other (lower priority) processes, paging/swapping, and so on. So you can wait: maybe alert the user/operator that there is trouble but then wait for the trouble to clear.

That's a weird take. I've been working for multiple decades now with systems that have no UI to speak of; their end-users are barely aware that there's a whole system behind what they can see, and that's a good thing because they become aware of it when it causes them trouble.

I take this stance from my programming mentor for many things, including error handling: the best solution to a problem is to avoid it. That's something everybody knows, actually, but we can forget it when designing/programming because there are so many things to deal with and worry about. Making the thing barely work can be a challenge in itself.

For errors, this usually means: don't let them happen. E.g. avoid OOM by avoiding dynamic allocation as much as possible; statically pre-allocate everything, even if it means megabytes of unused reserved space. Don't design your serialization format with quotes around your keys just to allow "weird" key names, a feature that nobody will ever use and that creates opportunities for errors.

Of course it is not always possible, but don't miss the opportunity when it is.

> That's a weird take

I appreciate that, but...

> I've been working for multiple decades now with systems that have no UI to speak of; their end-users are barely aware that there's a whole system behind what they can see, and that's a good thing because they become aware of it when it causes them trouble.

Notice I said "user" not "end-user" or "customer".

This was not an accident.

In your system (as in mine) the "user" is the operator.

> the best solution to a problem is to avoid it.

That's your opinion man. I don't know if you can avoid everything (I certainly can't).

Something to consider is why Erlang people have been trying to get people to "let it crash" and just deal with that, because enumerating the solutions is sometimes easier than enumerating the problems.

That’s not his opinion, that’s the standard technique in systems programming. It’s why there’s software out there that does in fact never crash and shows consistent performance.

> Something to consider is why Erlang people have been trying to get people to "let it crash" and just deal with that

Yes, if you can afford it, I would say it is a way to avoid the problem of handling errors in a bug-free way. But it is more than yet another error handling tactic, it is a design strategy.

> For a stupid-simple example: You can't even check if disk is going to be full!

Isn’t this addressed by preallocating data files in advance of writing application data? It’s pretty common practice for databases for both ensuring space and sometimes performance (by ensuring a contiguous extent allocation).

I don’t think it’s possible to get that to work 100% of the time on typical modern hardware.

As an example, a disk block may be bad, requiring the OS to find another one to store that pre-allocated disk space. If you try to prevent that by writing to the preallocated space after you allocated it, you still can hit a case where the block goes bad after you did that.

> Isn’t this addressed by preallocating data files in advance of writing application data?

Allocation isn't the only thing that can fail: Actually writing to the blocks can fail, and just because you can write zeros doesn't mean you can write anything else.

You really can't know until you try. This is life.

You’re not wrong, but you are moving the goalposts a little; GP is responding to your "disk is going to be full" scenario, and that is handled well, I’d say, by preallocation. Then of course other things can go wrong too.

Yes things can go wrong. That's the point. The problem is what to do about it.

I think if you needed a better example of something you can't defend against in order to get the main idea, that's one thing, but I'm not giving advice in bad faith: Can you say the same?

The person you are replying to has a reasonable position and explained it well, IMHO.

No they really don't.

fallocate() failing is exactly the same as write() failing from the perspective of the user, because the disk is still full, and the user/operator responds exactly the same way (by waiting for cleanup, deleting files, adding storage, etc).

Databases (the example given) actually do exactly as koolba suggests, and ostensibly for the reason of surfacing the error to the application. The point is what to do about the error itself though, not about whether fallocate() always works or is even possible.

There is in fact a common strategy for dealing with those errors: shut the process down. That relies on another strategy: reliable persisted state. Best practice here is to use mechanisms that ensure the persisted state is valid at every moment. Some databases can guarantee this. You can also write out the new state to a temp file and atomically replace the old state with the new one.
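
A minimal sketch of the temp-file-and-rename part in Node.js (file names are illustrative; rename is atomic on POSIX filesystems when both paths are on the same volume):

    import { promises as fs } from "node:fs";

    // Write the new state next to the old one, then atomically swap it in.
    async function persistState(path: string, state: unknown): Promise<void> {
      const tmp = `${path}.tmp`;
      const handle = await fs.open(tmp, "w");
      try {
        await handle.writeFile(JSON.stringify(state));
        await handle.sync();           // make sure the bytes are on disk first
      } finally {
        await handle.close();
      }
      await fs.rename(tmp, path);      // the old state stays valid until this succeeds
    }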

This. There are errors and states you cannot predict. As a grandchild comment says: it's easier to provide solutions than to list all the errors. Find your happy path and write code that steers you back onto it. The code will be shorter, less surprising, and actually describable. It's also testable, because you treat whole classes of errors consistently, so your error-combination count is smaller.

The errors-as-values approach suffers from a similar problem as async/await - it's leaky. Once a function is altered to possibly return an error, its signature changes and every caller needs to be updated (potentially all the way up to main(), if the error is not handled before that).

This approach is great when:

* program requirements are clear

* correctness is more important than prototyping speed, because every error has to be handled

* no need for concise stack trace, which would require additional layer above simple tuples

* language itself has a great support for binding and mapping values, e.g. first class monads or a bind operator

Good job by the author on acknowledging that this error handling approach is not a silver bullet and has tradeoffs.

It’s only leaky if you do not consider failure cases to be as equally intrinsic to an interface’s definition as its happy-path return value :-)

[deleted]

As with most things in C++, I wish the default was `nothrow` and you added a throw annotation for a function that throws. There are so many functions that don't throw but aren't marked `nothrow`.

In my experience I've used exceptions for things that really should never fail, and optional for things that are more likely to.

If you squint hard enough, any potentially allocating function is fallible. This observation has motivated decades of pointless standards work defending against copy or initialization failure and is valuable to the people who participate in standardization for that reason alone.

For practitioners it serves mainly as a pointless gotcha. In safety-critical domains the batteries that come with C++ are useless, so while they are right to observe that this would be a major problem there, they offer no real relief.

[dead]

Common Lisp has restarts in addition to exceptions. A restart works almost the same way as an exception, except it allows the handler to restart execution from the place the error happened. I wish we had this in modern widespread languages.

It's strange that they didn't write about the Erlang/Elixir approach of

1. returning a tuple with an ok or fail value (so errors as values) plus

2. pattern matching on return values (which makes error values bearable) possibly using the with do end macro plus

3. failing on unmatched errors and trying again to execute the failed operation (fail fast) thanks to supervision trees.

Maybe that's because the latter feature is not available nearly for free in most runtimes and because Erlang style pattern matching is also uncommon.

The approach requires a language that's built on those concepts, not one in which they are added unnaturally as an afterthought (otherwise the approach becomes burdensome).

Pattern matching: https://hexdocs.pm/elixir/pattern-matching.html

With: https://hexdocs.pm/elixir/1.18.1/Kernel.SpecialForms.html#wi...

Supervisors: https://hexdocs.pm/elixir/1.18.1/supervisor-and-application....

The three things I wish were more standardized in the languages I use are

1. Stacktraces with fields/context besides a string

2. Wrapping errors

3. Combining multiple errors
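
For 2 and 3, at least, JS already has native primitives (the ES2022 `cause` option on Error, and `AggregateError`); a quick sketch:

    // Wrapping: keep the original error attached as the cause (ES2022).
    function loadConfig(text: string): unknown {
      try {
        return JSON.parse(text);
      } catch (e) {
        throw new Error("config file is unreadable", { cause: e });
      }
    }

    // Combining: collect several failures into one AggregateError.
    async function runAll(tasks: Array<() => Promise<void>>): Promise<void> {
      const results = await Promise.allSettled(tasks.map((t) => t()));
      const failures = results
        .filter((r): r is PromiseRejectedResult => r.status === "rejected")
        .map((r) => r.reason);
      if (failures.length > 0) throw new AggregateError(failures, "some tasks failed");
    }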

Observability tools give you this (as long as it can be handled and isn't a straight up panic).

Most of these proposals miss the point. Errors need a useful taxonomy, based on what to do about them. The question is what you do with an error after you've caught it. A breakdown like this is needed:

- Program is broken. Probably need to abort program. Example: subscript out of range.

- Data from an external source is corrupted. Probably need to unwind transaction but program can continue. Example: bad UTF-8 string from input.

- Connection to external device or network reports a problem.

-- Retryable. Wait and try again a few times. Example: HTTP 5xx errors.

-- Non-retryable. Give up now. Example: HTTP 4xx errors.

Python 2 came close to that, but the hierarchy for Python 3 was worse. They tried; all errors are subclasses of a standard error hierarchy, but it doesn't break down well into what's retryable and what isn't.

Rust never got this right, even with Anyhow.
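
A rough sketch of what that kind of breakdown can look like in code (a hypothetical classification in TypeScript; the error classes and HTTP mapping are illustrative, not from any particular library):

    // Classify failures by what the caller should do next.
    type Disposition = "abort" | "unwind" | "retry" | "give-up";

    class CorruptInputError extends Error {}

    // Hypothetical shape many HTTP clients expose; adjust to your client of choice.
    function isHttpError(e: unknown): e is { status: number } {
      return typeof e === "object" && e !== null && "status" in e;
    }

    function classify(e: unknown): Disposition {
      if (e instanceof RangeError) return "abort";          // program is broken
      if (e instanceof CorruptInputError) return "unwind";  // bad external data
      if (isHttpError(e)) {
        return e.status >= 500 ? "retry" : "give-up";       // 5xx retryable, 4xx not
      }
      return "abort";                                       // unknown: assume a bug
    }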

Severity is undecidable in the majority of library functions; it’s decidable at the call site instead. That’s why the language should provide sugar to pick the behaviour - exceptions (propagate as is, optionally decorate/wrap), refute (error value, result type), mute/predicate-like (use a zero value, i.e. undefined in JS/TS).

> optionally decorate/wrap

If you are using exception handlers for transmitting errors instead of exceptions (i.e. what should have been a compiler error but wasn't detected until runtime), wrapping should be mandatory, else you'll invariably leak implementation details, which is a horrid place to end up. Especially if you don't have something like checked exceptions to warn you that the implementation has changed.

There's no universal taxonomy of "this error is retryable, this one non-recoverable"; it's context dependent.

As a boring example, I might write something that detects when a resource gets hosted, e.g. goes from 404 -> 200.

The best I imagine you can do is be able to easily group each error and handle them appropriately.

Well you don't usually want double retry loops, and sometimes that subscript error is because the subscript came from input.

What to do with an error depends on who catches it. That's probably why Python got it wrong and then Rust said worse is better.

An interesting development in this direction is Clojure’s anomalies taxonomy: nine outcomes (two retriable, two maybe-retriable, five non-retriable; nine respective ways to fix)

See table here: https://github.com/cognitect-labs/anomalies

This is called the "result pattern". I would not call this a novel concept. In C# we use this: https://github.com/ardalis/Result

Yes, I stopped reading at:

> The most common approach is the traditional try/catch method.

Weird to stop reading at a statement that is factually true.

Returning error codes was actually the first approach to error handling. Exceptions (try/catch) became widespread much later. The article got it backwards calling try/catch "traditional" and Go's approach "modern".

Typed try/catch was tried in Java. The typing was not well liked and people voted with their feet for untyped exceptions. Euphoria turned to misery, and golang emerged with returning errors explicitly. Overall I would say that the return value and error value shouldn't be split as in Golang. A result type that forces the user to account for the error when accessing the return value is a much better approach. The compiler should make it fast.

> Exceptions (try/catch) became widespread much later

Exceptions, complete with try-catch-finally, were developed in the 60s & 70s, and languages such as Lisp and COBOL both adopted them.

So I'm not sure what you're calling "much later" as they fully predate C89, which is about as far back as most people consider when talking about programming languages.

Not in a codebase I develop or maintain. Nothing to see here, moving along.

Pretty sure this line: https://meowbark.dev/Better-error-handling#:~:text=return%20...

will immediately throw if b == 0, because

    a / b
is evaluated immediately, so execution never makes it into fromThrowable(). Does it need to be

    () => a / b
instead?

Similarly, withRetry()'s argument needs to have type "() => ResultAsync<T, ApiError>" -- at present, it is passed a result, and if that result is a RateLimit error, it will just return the same error again 1s later.

The caveat that this focuses on the JS ecosystem is important. JS error handling is notoriously terrible. Lots of the complaints here would be solved by native support for typed & checked exceptions, the latter of which was not mentioned in the article. Support for those two would be a big improvement, and we could still use the Optional/Result "errors as values" pattern in places where that is the more elegant approach.

I’m of the opinion that the best error handling is to not encounter the error in the first place.

That means good UX, intuitive interfaces, good affordances, user guidance (often, without requiring them to read text), and simplicity.

When an error is encountered, it needs to be reported to the user in as empathetic and useful a manner as possible. It also needs to be as “bare bones” simple as can reasonably be managed.

Designing for low error rates starts from requirements. Good error reporting requires a lot of [early] input from non-technical stakeholders.

Errors often come from the fact that we build on an unreliable medium.

Lost packets, high latency, crashed disks, out of memory etc.

You can talk to your users sure but you need to handle this stuff at some level either way. Shit happens!

Absolutely.

But we need to plan for it from Day One, and that can also include things like choosing good technology stacks.

Like I said, when inevitable errors happen, how we communicate (or, if possible, mitigate silently) the condition, is crucial.

[EDITED TO ADD] Note how any discussion of improving Quality of software is treated, hereabouts. Bit discouraging.

Correct. What errors can happen PLUS how we communicate (and what we do: roll back the transaction? Etc.) PLUS how we ensure the correctness of both (sane programming language, good idioms, testing, proofs, etc.)

Anyone who has maintained a shipping product can relate to this.

Often, there is disagreement over the definition of “bug.”

There’s the old joke, “That’s not a bug, it’s a feature!”, but I have frequently gotten “bug reports,” stating that the fact that my app doesn’t do something that the user wants, is a “bug.”

They are sometimes correct. The original requirements were bad. Even though the app does exactly what it says on the tin, it is unsuitable for its intended demographic.

I call that a “bug.”

Hey, thanks for mentioning my library, go-go-try. I just released a new version with a simplified API surface, check it out!

Please, please don’t graft on error handling schemes like this.

In Go, use Go style; in JS, use try/catch/finally.

Some junior engineer is going to stumble upon this and create a mess, and a bad week, for the next poor soul who has to work with it.

Not a solution for every single type of JS error, but reading through this I found myself wondering why not just use .then().catch() statements when making async calls.

Compared to try / catch with await, falling back to promises at least makes the error handling explicit for each request — more along the lines of what Go does without having to introduce a new pattern.
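
For example (User, fetchUser, and the endpoint are made up just for the sketch):

    interface User { id: string; name: string }

    // Stand-in for a real API call.
    async function fetchUser(id: string): Promise<User> {
      const res = await fetch(`https://example.com/users/${id}`);
      if (!res.ok) throw new Error(`HTTP ${res.status}`);
      return (await res.json()) as User;
    }

    // try/catch with await: the whole block is covered, and it's easy to catch too much.
    async function withTryCatch(): Promise<User | null> {
      try {
        return await fetchUser("42");
      } catch (err) {
        console.error("fetchUser failed", err);
        return null;
      }
    }

    // Plain promises: the handler is attached to this specific request.
    function withCatch(): Promise<User | null> {
      return fetchUser("42").catch((err) => {
        console.error("fetchUser failed", err);
        return null;
      });
    }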

try/catch, where you catch the right types of errors at the right level, is hard to beat.

However, many make the mistake of handling errors at the wrong level. This leads to really buggy, hard-to-reason-about code and in some cases really bad data-inconsistency issues.

A rule of thumb is to never catch a specific error which you are not in a good position to handle correctly at that precise level of code. Just let it pass through.
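
A small sketch of that rule of thumb (ValidationError, parseOrder, and saveOrder are invented; Response is the standard fetch-style response):

    class ValidationError extends Error {}

    function parseOrder(body: string): { id: string } {
      const parsed = JSON.parse(body);
      if (typeof parsed.id !== "string") throw new ValidationError("order needs an id");
      return { id: parsed.id };
    }

    async function saveOrder(order: { id: string }): Promise<void> {
      // stand-in for a database write; anything in here may throw
    }

    async function handleRequest(body: string): Promise<Response> {
      try {
        const order = parseOrder(body);
        await saveOrder(order);
        return new Response("ok", { status: 200 });
      } catch (err) {
        if (err instanceof ValidationError) {
          return new Response(err.message, { status: 400 }); // we can handle this one here
        }
        throw err; // everything else passes through to a level that can deal with it
      }
    }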

Very balanced post, thank you. Often these posts tout an approach and never consider the downsides.

> Lack of Type System Integration

Well, IIUC, Java had (and still has) something called “checked exceptions”, but people have, by and large, elected not to use that kind of exception, since it makes the rest of the code balloon out with enormous lists of exceptions, each of which must be changed when some library at the bottom of the stack changes slightly.

> each of which must be changed when some library at the bottom of the stack changes slightly.

I hate checked exceptions too, but in fairness to them this specific problem can be handled by intermediate code throwing its own exceptions rather than allowing the lower-level ones to bubble up.

In Go (which uses error values instead) the pattern (if one doesn’t go all the way to defining a new error type) is typically to do:

    if err := doSomething(…); err != nil {
      return fmt.Errorf("couldn’t do something: %w", err)
    }
which returns a new error which wraps the original one (and can be unwrapped to get it).

A similar pattern could be used in languages with checked exceptions.

The biggest annoyance with Java checked exceptions IME is that it’s impossible to define a method type that’s generic over the type of exception it throws.

Checked exceptions should indicate conditions that are expected to be handled by the caller. If a method is throwing a laundry list of checked exceptions then something went wrong in the design of that method’s interface.

> The biggest annoyance with Java checked exceptions IME is that it’s impossible to define a method type that’s generic over the type of exception it throws.

Exactly. If Stream methods like filter() and map() could automatically "lift" the checked exceptions thrown by their callback parameters into their own exception specifications, it would solve one of the language's biggest pain points (namely: Streams and checked exceptions, pick one).

I like checked exceptions in general, but I agree, they didn't play well with the Streams API.

> it makes the rest of the code balloon out with enormous lists of exceptions

That's mostly developer laziness: They write a layer that calls the exception-throwing code, but they don't want to think about how to model the problem in their own level of abstraction. "Leaking" them upwards by slapping on a "throws" clause is one of the lowest-effort reactions.

What ought to happen is that each layer has its own exception classes, capturing its own model for what kinds of things can go wrong and what kinds of distinctions are necessary. These would abstract-away the lower-level ones, but carrying them along as linked "causes" so that diagnostic detail isn't lost when it comes time for bug-reports.

Ex: If I'm writing a tool to try to analyze and recommend music that has to handle multiple different file types, I might catch an MP3 library's Mp3TagCorruptException and wrap it into my own FileFormatException.

It is laziness to an extent, sure, but that's a huge part of language design. We wouldn't use Java or C# or Python or any of these high-level languages if we weren't lazy; after all, we'd be writing assembly like the silicon gods intended!

The problem with Java checked exceptions is they don't work well with interfaces, refactoring, or layering.

For interfaces you end up with stupid stuff like ByteArrayInputStream#reset claiming to throw an IOException, which it obviously never will. And then for refactoring & layering, it's typical that you want to either handle errors close to where they occurred or far from where they occurred, but checked exceptions force all the middle stack frames that don't have an opinion to also be marked. It's verbose and false-positives a lot (in that you write a function, hit compile, then go "ah, forgot to add <blah> to the list that gets forwarded along..." -> repeat).

It'd be better if it was the inverse, if anything, that exceptions are assumed to chain until a function is explicitly marked as an exception boundary.

When I say lazy, I mean the essential work of modeling what's going on and making a decision which can only be made by a human. In this respect, choosing what exceptions-types to throw is like choosing what regular-types to return. If I return a GraphNode instead of a DatFile, then I should probably throw a GraphNodeException instead of a DatFileChecksumException.

Syntactic sugar should make it easier to capture the decision after it's been made. For example, like replacing "throws InnerException" (perhaps a leaky abstraction) with something like "throws MyException around InnerException".

Yes but you only make those types of decisions on library boundaries, which is a relatively small amount of code. Meanwhile checked exceptions make all of the code harder to deal with in non-trivial ways (eg, the ubiquitous "Runnable" cannot throw a checked exception). And it's that everywhere-else where "laziness" won and checked exceptions died.

I think it's fair to say that having some sort of syntactically lightweight sum or union type facility makes this way nicer than anything Java ever had -- subclassing isn't really a solution, because you often want something like:

    type FooError = YoureHoldingItWrong | FileError
    type BarError = YoureHoldingItWrong | NetworkError
    fn foo() -> Result<int, FooError> { ... }
    fn bar() -> Result<int, BarError> { ... }
    fn baz() -> Result<String, BarError> { ... }
TypeScript's type system would hypothetically make this pretty nice if there were a common Result type with compiler support.
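
Even without dedicated compiler support, a hand-rolled discriminated union gets most of the way there in today's TypeScript (a sketch mirroring the pseudocode above, not a standard library type):

    type Result<T, E> = { ok: true; value: T } | { ok: false; error: E };

    type YoureHoldingItWrong = { kind: "holding-it-wrong" };
    type FileError = { kind: "file"; path: string };
    type FooError = YoureHoldingItWrong | FileError;

    // Toy implementation standing in for real work.
    function foo(): Result<number, FooError> {
      return Math.random() > 0.5
        ? { ok: true, value: 42 }
        : { ok: false, error: { kind: "file", path: "/etc/foo.conf" } };
    }

    const r = foo();
    if (r.ok) {
      console.log(r.value + 1);
    } else {
      switch (r.error.kind) {          // narrowing on "kind" handles each variant
        case "holding-it-wrong": console.error("read the manual"); break;
        case "file": console.error(`bad file: ${r.error.path}`); break;
      }
    }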

Rust needs a bit more boilerplate to declare FooError, but the ? syntax automatically calling into(), and into() being free to rearrange errors it bubbles up really help a lot too.

The big problem with Java's checked exceptions was that you need to list all the exceptions on every function, every time.

Java's sealed interfaces enable typing errors.

https://blogs.oracle.com/javamagazine/post/java-sealed-class...

Although it is the opposite of syntactically lightweight.

I agree; Java is constitutionally incapable of being lightweight. I much prefer Typescript's union syntax. I'm glad Python copied it.

records are lightweight

[deleted]

I love libraries that do a simple check and signal that it "failed" with ThingWasNotTrueException.

In a surprising twist: Java has ConcurrentModificationException. And, to counter its own culture of exception misuse, the docs have a stern reminder that this exception is supposed to be thrown when there are bugs. You are not supposed to use it to, I dunno, iterate over the collection and bail out (control flow) based on getting this exception.

[dead]