NAME="$1"
# Create directory for the code
mkdir "$NAME"
# Checkout the source
cd "$NAME" && git something
# and so on
Sometimes with an echo "You need to read and understand this before just using it"; exit 1 thrown in somewhere for good measure.
These files will have no logic, and make very little use of variables. They will also have a .txt filename to ensure that people understand that, first of all, this is a set of instructions for how to do something, which just so happens to also be a valid bash script.
# General form
dig @<resolver> <record>
dig @1.1.1.1 www.example.com
# Omit the resolver to use the system's configured resolvers
dig www.example.com
# Don't need the full output?
dig +short www.example.com

I realized that I've written scripts that are "executable checklists", but didn't see it as a design pattern until reading this article. Typically such a script starts out fully manual (do this, do that next, copy here, paste there...), then I replace steps with actual code where I can, while some steps remain manual if needed.
A great use of such checklist scripts is onboarding collaborators or new team members in a project.
Also, as grandparent comment pointed out, they reduce cognitive load by keeping the state of a process outside the head.
It allows for a pretty natural transition:
- (Markdown) Manual
- Do-nothing-script (printing out the manual in steps)
- Partial automation
- Full automation
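The second stage of that transition can be sketched in a few lines of Python. This is a minimal, hypothetical example; the step text is made up, and each `input()` call is the seam where real automation can later be slotted in, one step at a time:

```python
# A sketch of the "do-nothing" stage: each manual step is printed and the
# script waits for the human to confirm before showing the next one.

STEPS = [
    "Create a ticket for the request and note the ticket ID.",
    "Generate an SSH keypair and attach the public key to the ticket.",
    "Email the user their getting-started instructions.",
]

def run(steps):
    for i, step in enumerate(steps, 1):
        print(f"Step {i}: {step}")
        input("Press Enter when done... ")
    print("All steps complete.")
```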
Another article, with HN discussion: https://news.ycombinator.com/item?id=20293246
Doing manual steps sucks, wouldn't it be great if this was automated for the next person? Be the change you want to see in the world.
But all this gets back to something I used to say a lot, and the more refined version I say now. I used to say "Have humans do what humans do well; have computers do what computers do well". The key part of that was the idea that computers can't do what humans do. Now, I say "Anything a computer can do well, a human cannot do well". Which includes those pesky twenty-step manual processes.
When automating a complex manual process, the first thing I do is write comments describing the sequence, and any parameters needed. Parameters in particular can be enlightening, because when manual processes grow organically, naming consistency becomes a source of suffering - humans can just sort of figure out the right thing to type, computers can't intuit these things and they must be made explicit. It's worse when it can't be explicit without updating the process or creating some sort of data store for mapping.
This is why naming things is one of the two hard problems in computer science.
If you want more "interest" during the process, make it a point to refine the documentation for your process (clarify a step here, split one command into two steps). Then you have a very clear spec by the time the process needs to be automated.
These scripts organize tasks into flows in a way that makes it possible to split work by logical checkpoints. The tasks between these checkpoints are dependent on one another in the overall flow, but have independent implementations that can be delegated to software or done by humans.
I can see the use in an isolated team of a few people who will write these scripts for their own benefit.
I am opposed in general to having information scattered around. Better to have everything in a single place, whether that's a guide on how to provision a user, how to add a new endpoint to your API, or how to write a new end-to-end test.
It’s very different, in at least two ways:
- If your checklist is on a wiki, people may stop referring to the checklist once they’re familiar with the process. That’s bad if there are important changes. But if the process is “run this script and do what it says”, I think people are much more likely to keep doing that.
- When you start automating steps, people using the checklist pick up that automation for free. With a checklist on a wiki, any new automation means the user has to do something differently (eg maybe step 10 is “run this script” instead of “log on to the dev console and disable write access”)
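That "automation for free" property falls out naturally if the script treats a step as either a prompt or a function. A hypothetical sketch (the step contents are invented for illustration):

```python
# A checklist where each entry is either a prompt string (manual) or a
# callable (automated). Swapping a string for a function upgrades the
# step for everyone who runs the script, with no change to how it's run.

def disable_write_access():
    # Pretend automation of the old manual step.
    print("write access disabled")

steps = [
    "Announce the deploy freeze in the team channel.",
    disable_write_access,  # was: "log on to the dev console and disable write access"
    "Verify the dashboard shows read-only mode.",
]

def run_checklist(steps):
    for step in steps:
        if callable(step):
            step()
        else:
            print(step)
            input("Press Enter when done... ")
```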
Unless we have script shells that can universally launch a web document side-by-side with the shell prompt where it needs to be used, having the instructions displayed from the shell itself, step by step, reduces the mental load required to follow them. Also, the benefit of having an already semi-automated process as the basis for a future script is not negligible.
Keeping track of which step you are on while following a long doc is no easy task when some steps need to be left unattended (and produce scrolling terminal output, making it harder to view the last-run command).
Most documentation software (e.g., wikis) does a poor job of encapsulating this. The "do-nothing" scripts might be easier to maintain, but they force you to use plaintext.
I recently started an open source project that might be more helpful for teams (if for no other reason than it uses Markdown): https://faq.dhol.es/@Soatok/public-beta/what-is-faq-off
You can keep these scripts versioned in a repository next to the actual product code, with all benefits like forking, merging and pull-request reviews.
I mention this because I worked at a company where checklists were typically simply added to a wiki and required separate processes for things like peer review.
An example is approving new users to access a service. Say your list of users is a YAML file. Someone edits the file, fires off a PR, waits for someone to approve it, merges it, and applies the config. You might think it's so easy to do by hand, automating it would be a waste. But if you actually add up the time it takes for humans to do those steps, plus the lead-time from creation to completion, plus the occasional typo, wrong username, or Norway bug, it can take from hours to days to actually complete the task. By automating steps one at a time, you gradually move toward the entire process being a single script that takes a username, and completes the task in about 30 seconds. This is a small win, but added up over all your operations, reduces hundreds of hours of toil and reduces human error. If every engineer automates one step of one runbook per week, over a year you'll have hundreds of reusable automation steps.
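The fully automated end state of that runbook can be surprisingly small. A hypothetical sketch (the filename, username rules, and function names are all invented for illustration, not part of any real system):

```python
# Hypothetical single-script replacement for the manual PR dance: take a
# username, validate it, and append it to the users file, eliminating the
# typo / wrong-username failure mode up front.
import re

# Illustrative policy: lowercase, starts with a letter, 3-32 chars.
USERNAME_RE = re.compile(r"^[a-z][a-z0-9_-]{2,31}$")

def add_user(username, path):
    if not USERNAME_RE.match(username):
        raise ValueError(f"invalid username: {username!r}")
    with open(path, "a") as f:
        f.write(username + "\n")
```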
To the OP, I'd add the following:
1) Use the least complex way to accomplish the task. If you don't need to write Python, don't. Bourne shell is your friend. Reuse existing tools and automated features.
2) Add documentation to the code that can be auto-generated, so you can use the script as long-lived static documentation. Documenting somewhere else will lead to stale docs.
3) Run all your runbooks from a Docker container.
4) Plan to run these runbooks-in-containers from a CI/CD system using parameterized builds.
5) Organize your reusable automation steps by technology, but have product-specific runbooks that call the automated features.
6) In a big org, a monorepo of automation features is an efficient way to expose automated features to other teams and ramp up on shared automation quicker.
I start with my "template" script and fill it out.
Totally over-engineered for most tasks, but then as the tasks mature they have a firm foundation.
very simplified example:
#!/usr/bin/env python3
import sys, os, argparse

parser = argparse.ArgumentParser(argument_default=None)
parser.add_argument('-d', '--debug', action='store_true', help='debug flag')
parser.add_argument('files', nargs='*', default=[], help='files to process')
arg = parser.parse_args()

def die(errmsg, rc=1):
    print(errmsg, file=sys.stderr)
    sys.exit(rc)

if not arg.files:
    die('filename required')

Of course, to make sure it was efficient enough to be helpful, it had to be a configurable terminal app I could quickly update and review via the CLI, with abundant I/O flags/options.
It also made sense to write it in Scheme because I had been thinking I wanted to try Lisp earlier that day.
Fast-forward three weeks and I find myself hopelessly trying to refactor this strangely ugly-beautiful ascii-art themed effort tracker run by an abomination of labyrinthine "functional" source and realize (maybe just finally admit to myself) it was all an excuse to avoid all that shit I'd convinced myself I was going to achieve by kanbanning in the first place.
The best part is I regret nothing!
Last time, I figured I'd start going at it gradually. So what I did was I turned the checklist into a bash script, telling me what to do at each step. I also implemented the ones that are easy to automate. The plan is to knock out two or three items each time I do the backup, until it's fully automated.
It's basically the minimal version of a standalone workflow automation; it's got crappy state storage and is really only good for a single human execution unit per script (or perhaps multiple swapped sequentially rather than in parallel), but it's an instance of a fairly well-known class of do-something software.
https://gist.github.com/mbarkhau/60fc4bbe505914369ebd2fec1ab...
I'm very much against documenting concrete steps in a wiki if it is something that can be scripted. Such documentation becomes something that you must maintain, and faulty documentation is worse than no documentation.
I think that wiki documentation should be used to summarize workflows and provide context, but if you're writing step-by-step instructions, often you can just write a script with exactly the same amount of effort.
I call it Hoist - from "hoisting up" concrete, hard-coded values into abstract symbols for reuse. It's complete vaporware at this point, just a place to collect my thoughts for a future essay.
I stumbled into doing roughly the same in my macOS system bootstrap script. I found it from the other side, though--usually only after failing to find a good way to automate a step, or realizing automation would be vastly more complex/fragile than just prompting myself to do something manually at the right time.
Probably a more useful concept when it's conscious from the get-go. :)
If you are working at an enterprise, then these processes have to be implemented according to ITIL in a specialized IT Service Management system like ServiceNow, which allows building process flows as directed acyclic graphs and automating the necessary steps.
The difference is that it's a more gradual entry to automate the task.
One could, for example, replace any item with a script/program invocation, while leaving the others untouched. It's an intelligent, piecemeal step towards automation.
I would also add that in really critical tasks, one could automate _either_ the item, or its verification. Sometimes it's easier to check that something is correct than to do it-- or vice-versa.
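Automating the verification half can be as simple as a file check. A hypothetical sketch: the human performs the edit by hand, and the script verifies the result, failing loudly instead of relying on a human to eyeball it (the function name and expected-line idea are illustrative):

```python
# Automate the *check* rather than the action: after the manual step
# "add this line to that file", the script confirms it actually happened.
def line_present(path, expected):
    """True if `expected` appears verbatim as a line in the file."""
    with open(path) as f:
        return any(line.rstrip("\n") == expected for line in f)
```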
However, I don't think it's a good idea. Checklists are not bad, but they are to be executed by humans.
I think trying to automate an existing human process directly is a mistake. Human processes often have features that only humans can handle, such as pattern recognition or adapting to small differences. It's often easier to create a computerized (automated) process from scratch than to try to account for all the edge cases that humans might have to deal with (and do without problems).
(The opposite is also true, processes that computers have no issues with can make trouble for humans. For example, humans have imperfect memory. So if the process asks humans to keep track something for extended periods, it can be difficult.)
I disagree. People employ checklists precisely in the situations where it is most critical that every step be executed, in order, as written. The airline industry flies on checklists.
>I think trying to automate an existing human process directly is a mistake. Human processes often have features that only humans can handle, such as pattern recognition or adapting to small differences. It's often easier to create a computerized (automated) process from scratch than to try to account for all the edge cases that humans might have to deal with (and do without problems).
Now, you're letting perfect prevent you from ever attempting the good. TFA is explicitly about building a checklist that can morph the automatable steps into automated ones.
I am not arguing against checklists per se. At work, I work on (a rather old) software product which also largely runs on checklists. (Various installation and maintenance procedures are described by a checklist.)
> Now, you're letting perfect prevent you from ever attempting the good.
No. I just, based on my experience with checklists, argue that they are often a poor starting point. Often they rely on the ability of humans to execute them properly.
For example, the checklist might read "edit this text file and add this and this line after the line that looks like this". This is difficult to automate correctly; the proper way might be to generate or keep a correct copy of the text file so the automation can pick it up.
Humans are good at adapting checklists for the purpose at hand, computers aren't. That makes (human-oriented) checklists somewhat difficult to automate.
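One middle ground for the "add this line to that file" step is to make the edit idempotent, so the automated version is safe to re-run in situations where the human-oriented instruction relied on judgment. A minimal sketch (function name and semantics are my own, for illustration):

```python
# Idempotent version of "edit this file and add this line": appends the
# line only if it isn't already present, creating the file if needed.
def ensure_line(path, line):
    try:
        with open(path) as f:
            if any(l.rstrip("\n") == line for l in f):
                return False  # already present, nothing to do
    except FileNotFoundError:
        pass  # file doesn't exist yet; the append below creates it
    with open(path, "a") as f:
        f.write(line + "\n")
    return True
```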
1. We have a tedious manual checklist.
2. We’d like it to be fully automated.
3. How do we get there from here?
You’re correct to say that if the process had been fully automated from the start it may well have been implemented very differently. But those scripts are hard to write, and basically useless until they’re complete.
Often the only practical way to do this stuff is incremental refinement, where each small step adds a little bit of value.
The trick is figuring out where to begin. I think the suggestion in this post is a wonderful way to get over that first hurdle.
If you try to emulate a person with a checklist, then you will often find it is actually harder to automate, because the steps often rely on human adaptability and common sense.
1. Write SQL to automate
2. Put SQL with output in Metabase
3. Have it manually checked alongside day-to-day operation
4. Iterate 1-3 (you will often get a lot of feedback)
5. Replace the manual work, now that the SQL has been checked on a lot of cases in production
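Step 3 (the "manually checked alongside day-to-day operation" phase) can be as simple as diffing the automated query's rows against the result the human produced. A hypothetical sketch:

```python
# Shadow-mode check: compare the automated SQL output with the manually
# produced result; any rows in "missing" or "extra" are feedback for step 4.
def shadow_check(automated_rows, manual_rows):
    auto, manual = set(automated_rows), set(manual_rows)
    return {"missing": manual - auto, "extra": auto - manual}
```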
What's a good way to generalize such a Python script so that Windows and *nix users can run it?
What benefits would that give?
The big advantage is someone else can pick up from where you started.
And reporting is built in as you can save the notebook and review it later.
The only thing it lacks is control flow. A good operations script has exit points: if an operation fails, you often want to rollback and abort. Though even there, you can save state in a variable and have a check.
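Those exit points can be sketched by pairing each step with an undo action and rolling back completed steps in reverse order on failure. This is a minimal illustration, not any particular tool's API:

```python
# Sketch of "exit points" in an operations script: each step is a
# (name, action, undo) triple; if a step fails, everything already
# completed is undone in reverse order before the error is re-raised.
def run_with_rollback(steps):
    done = []
    for name, action, undo in steps:
        try:
            action()
        except Exception as exc:
            print(f"step {name!r} failed: {exc}; rolling back")
            for undo_name, undo_fn in reversed(done):
                undo_fn()
            raise
        done.append((name, undo))
```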
all joking aside, the reason he's doing classes seems to be an intention towards making a FooStep (and/or a BarStep) which could then maybe handle generic Foo stuff (authentication comes to mind) in the future?
Allows you to do both local and SSH-based interactions.
They're a breed with a very strong sense of "It works so I don't care", and honestly I don't really blame them.
Whether or not the author was a Java programmer shouldn't matter; I think one should try to write idiomatic code in any language, unless there are good reasons against it.
The whole point of public keys is that the private key never has to leave the box it’s generated on. Once you go sharing them you might as well just use a random passphrase.
curl -Ls github.com/turbo.keys >> ~/.ssh/authorized_keys
Simplified of course. A script is usually used to revoke that access shortly after. Plus 2FA SSH.

If an IT guy sent me “here’s your new private key”, even through 1Password, I’d say “no thanks.”