Shell.how: Explain Shell Commands

URL: shell.how
17 comments

The parser needs work; it thinks "${DEBUG:-0}" is not valid shell.

(explainshell.com is only slightly better; it doesn't explain the parameter expansion, which would be very useful as there are 30 different kinds of expansion. See https://www.gnu.org/software/bash/manual/bash.html#Shell-Exp...)
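For reference, `${DEBUG:-0}` is perfectly valid POSIX shell: it expands to the value of `DEBUG`, or to `0` when `DEBUG` is unset or empty. A minimal sketch:

```shell
# ${VAR:-default} falls back to "default" when VAR is unset or empty
unset DEBUG
echo "${DEBUG:-0}"   # prints 0

DEBUG=1
echo "${DEBUG:-0}"   # prints 1
```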

I tried

    tar xvzf 1.zip
and got

    xvzf: is neither a valid subcommand, option, or argument.

I've switched to LLMs for explaining commands. Here's ChatGPT-4o:

The command tar xvzf 1.zip appears to be trying to extract a compressed archive file, but there's a mismatch between the file extension (1.zip) and the command options used for tar. Let me break it down:

tar: This is the command-line utility for working with tar archives (tape archives). It can be used to create, extract, and manipulate tar files.

x: This option tells tar to extract files from an archive.

v: This stands for "verbose", which means that tar will list the files being extracted to the terminal.

z: This option tells tar to handle a compressed archive using gzip. It is typically used with .tar.gz or .tgz files.

f: This tells tar that the next argument is the name of the archive file.

However, the file 1.zip is a ZIP file, not a tar.gz file. ZIP files are handled by the unzip command, not tar. Using tar xvzf on a .zip file would result in an error, as tar expects a .tar.gz file when using the z option.

To properly extract a .zip file, you should use:

unzip 1.zip

This command will extract the contents of the ZIP file.

Well it got the flags right, but it's kind of wrong too. I mean it's true that GNU tar will not handle a Zip archive, but libarchive BSD tar is more than happy to handle most common archive formats, including Zip archives.

    $ tar -a -cvf file.zip a.exe
    a a.exe

    $ file file.zip
    file.zip: Zip archive data, at least v2.0 to extract, compression method=deflate

    $ tar xvf file.zip
    x a.exe
This may seem like a silly catch-22, but the truth is that a lot of the people who need to ask questions like this are in fact likely to encounter BSD tar. macOS ships with BSD tar with Zip support, and I believe even Windows does nowadays. I don't think vanilla Windows has an unzip command, so that could easily waste someone's time if they didn't realize it.
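If you're not sure which flavor of tar you have, the version banner tells you (the version strings below are illustrative; yours will differ):

```shell
tar --version | head -n 1
# GNU tar prints something like:  tar (GNU tar) 1.35
# BSD tar prints something like:  bsdtar 3.7.4 - libarchive 3.7.4 ...
```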

bruh. why did we need to involve llms?

you could do a man pages on steroids — i.e explains what the args do

a tiny asterisk is that llms will hallucinate and spit out bs

Isn't it true that you must provide the hyphen before `xvzf`?

You are right. And it seems that's the case with several other arguments that follow `tar`, such as `cvf`, `xvf`, `czvf`, etc.

Wow, fig looks so cool. Shame it's shutting down. I don't see a clear explanation for how Amazon Q replaces it.

As far as I understand, the autocompletion specs, which is what powers this tool, are the only part of fig that is in fact Open Source

https://github.com/withfig/autocomplete

Local-first, from the command line, using llamafile version 0.2.1:

    Explain the shell command: git commit -m "hello world"
The `git commit` command is used to create a new commit in Git. A Git commit is a snapshot of the project at a particular point in time, with the ability to track changes made and revert to previous versions if needed.

In this specific command:

- `git commit` is the shell command itself.

- The `-m` option is used to specify a message for the commit. This message provides context and information about the changes that were made in the commit.

- `"hello world"` is the argument provided for the `-m` option; it's a simple string that serves as the commit message.

So, this command will create a new Git commit with the message "hello world".
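As a quick check of that breakdown, in a throwaway repo the string given to `-m` lands as the subject line of the newest commit (a sketch; the repo setup and identity values are illustrative):

```shell
cd "$(mktemp -d)"
git init -q
git config user.email you@example.com
git config user.name "You"

echo hi > file.txt
git add file.txt
git commit -q -m "hello world"

# The subject of the latest commit is the -m argument
git log -1 --format=%s   # prints: hello world
```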

[deleted]

Doesn't sound very useful to me. I tried entering something like this:

     sed 's/y/i/g' <<<'xxyyzz'
I was not satisfied with the explanation.

What happens with this?

  sed 's/y/i/g' <<-''EOL
  xxyyzz
  EOL
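For what it's worth, both forms should behave the same: the here-string and the heredoc each hand `xxyyzz` to `sed` on stdin, and `s/y/i/g` replaces every `y` with `i`, giving `xxiizz`. A portable equivalent using a pipe (`<<<` is a bash/zsh here-string, not POSIX):

```shell
printf 'xxyyzz\n' | sed 's/y/i/g'   # prints xxiizz
```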

I am honestly asking and not trying to be a smartass:

What are the advantages of LLM summaries over man pages/google searches/stackoverflow threads?

I could maybe empathize if an engineer only had 30 seconds to understand a code snippet or a large shell command, but how often is that really the case? Is the time gained worth the risk of hallucination or incompleteness?

Or perhaps engineers are not the target audience?

I can provide a use case, although I don't use this site specifically, I do use LLMs (generally Claude) to get command line syntax fairly regularly.

The answer is it's faster and (in my experience) generally accurate for common command syntax. I don't remember all the params for a variety of commands but Claude can get them quickly and easily. Where (IME) LLMs fall down is for more obscure commands but then those are harder to find via traditional searches anyway :D

Googling is very hit or miss these days you end up with a load of sponsored results and then have to try and find an example with ideally the exact syntax you're looking for.

> The answer is it's faster and (in my experience) generally accurate for common command syntax.

If you don't mind, around what level of complexity are you querying it for? Are the queries along the lines of "how do I find a keyword in a directory in UNIX?" or more along the lines of "write a bash script that will do foo recursively and treat edge case bar by doing baz"?

In my experience, if the queries are closer to the latter, it takes less time to "man page" the holes in my knowledge than to identify and to fix whatever is wrong with the LLM's initial guess. If you more often than not receive correct answers, even in non-trivial cases, could you please provide the LLM and any custom parameters if they exist? I'd be happy to be proven wrong and learn something in the process.

> Googling is very hit or miss these days you end up with a load of sponsored results and then have to try and find an example with ideally the exact syntax you're looking for.

May age be a factor here? I grew up as search engines were becoming a thing and ignoring "fake" results has become second nature.

Kind of both. Simple things like the switches on ffmpeg to rotate a video are one example. I do that infrequently, and Claude got it right in one. Quicker than reading the ffmpeg man page for that information by a long way. Also basic utility scripting, something like "give me a golang program that connects to a Docker instance and returns information in this format". I could write it, but a lot slower than an LLM.

As to the age thing, well, I doubt it. I started on the Internet in 1995, and I've been on it ever since. Sure, I can skim past fake results, but it slows me down, as the good results are ever fewer on the page. I can do it, but LLMs are faster and avoid all of that.

Also by using them more, I get a better sense for what will work and what won't work, improving my velocity again. It's easy to spot that they're good for things that are widely done, so more common and widely used commands and languages.

They fail hard on novel or new things, for example OTel code is one place where they're bad, and also deeper parts of things like k8s.

I tried ChatGPT 4 for some time and went back to just using manpages after several wrong results. Sometimes results were bad even after providing information about my platform.

Have there been posts already from people accidentally deleting files caused by trusting LLMs too much?

[deleted]

Pretty cool, though seems very similar to explainshell.com.

Also noticed that it breaks with single quotes, for example, this works:

    curl -H "Some-Header: value" http://example.com
But this causes it to error:

    curl -H 'Some-Header: value' http://example.com
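From the shell's point of view the two commands are identical: single and double quotes both yield the single argument `Some-Header: value` (double quotes additionally allow `$`-expansion inside, which this string doesn't use), so a parser that accepts one but rejects the other is mishandling its own tokenization. A quick check:

```shell
# Both quoting styles produce the same single word
echo "Some-Header: value"
echo 'Some-Header: value'
```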

How’s this different from explainshell?

Explainshell seems to have parsed manpages and extracted arguments, while this tool uses the fig autocomplete specs

Sounds like explainshell's approach is more robust, no?

If you check the git commit example on the homepage, it for some reason describes the git command as "The stupid content tracker"

so does its man page https://git-scm.com/docs/git

The readme[1] expands on that:

> "git" can mean anything, depending on your mood.

> - random three-letter combination that is pronounceable, and not actually used by any common UNIX command. The fact that it is a mispronounciation of "get" may or may not be relevant.

> - stupid. contemptible and despicable. simple. Take your pick from the dictionary of slang.

> - "global information tracker": you're in a good mood, and it actually works for you. Angels sing, and a light suddenly fills the room.

> - "goddamn idiotic truckload of sh*t": when it breaks

[1]: https://github.com/git/git/blob/e83c5163316f89bfbde7d9ab23ca...

"This is a stupid (but extremely fast) directory content manager. It doesn't do a whole lot, but what it _does_ do is track directory contents efficiently."

Ah Linus..

Alternatively pipx install llm

    explain() {
        # Use llm to generate a simple explanation for a command.
        # Example: explain 'fortune | cowsay'
        local system_prompt="Be brief. Mention only serious errors, if any, otherwise just explain the code."
        local query="$1"
        llm "$query" --system "$system_prompt" "${@:2}"
    }

    explain 'fortune | cowsay'

This command combines two fun Unix/Linux utilities:

1. `fortune`: This program generates a random quote or short message, often humorous or thought-provoking.

2. `cowsay`: This is a program that generates ASCII art of a cow (or other character) with a speech bubble containing a message.

The pipe (`|`) between them takes the output of `fortune` and feeds it as input to `cowsay`.

So when you run this command, you'll see an ASCII art cow "saying" a random fortune message. It's a playful way to display random quotes or messages in the terminal.

Example output might look like:

     _________________________________________
    / Q: How many Marxists does it take to   \
    | screw in a light bulb? A: None: The    |
    \ light bulb contains the seeds of its   /
     \ own revolution.                       /
      ---------------------------------------
             \   ^__^
              \  (oo)\_______
                 (__)\       )\/\
                     ||----w |
                     ||     ||

`git blame -C -C -C` does not give a proper description
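For anyone curious, the stacked `-C` flags widen `git blame`'s copy detection: one `-C` detects lines moved or copied from files changed in the same commit, `-C -C` also searches the commit that created the file, and `-C -C -C` searches all commits (which can be slow on large repos). A sketch (the path is illustrative):

```shell
# Trace lines back through moves/copies across the whole history
git blame -C -C -C -- src/main.c
```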

One of these for awk scripts would be amazing.