Basically, if an assertion fails in the middle of an individual test, most test runners stop executing that test. Ava (by design) continues on with the rest of that test. This makes it harder to debug issues.
(This may not be a problem for everyone. I'm only mentioning it because it was a deal-breaker for me and I wish I'd known it going in.)
> Tape and tap are pretty good. AVA is highly inspired by their syntax. They too execute tests serially. Their default TAP output isn't very user-friendly though so you always end up using an external tap reporter.
Nicer looking test output and serial execution seem to be the issues here, but I actually like the serial execution bit because it lets you write some messier tests (some that re-use a local test DB let's say) before you straighten up and write shared-nothing test-suites that can run in isolation (spin up a DB container/make a db-per-test-suite/etc).
I ended up just pointing Jest at my build output folder and that works. There are a couple minor annoyances but whatever.
(I actually started with Ava because it more directly supports using the existing build output, but I ran into another issue and had to switch back to Jest: https://github.com/avajs/ava/issues/2385)
One huge benefit of Jest though is the built-in mocking, including module mocking. This stuff is BYO in AVA. Module mocks in particular can be quite annoying to set up, so it's really nice that Jest pulls them all together in one cohesive package.
Workers are coming to Node... so soon.
Lately I’ve been trying to learn kotlin... and man... that stuff (gradle) adds a weird amount of boilerplate to everything.
JS / TS on the other hand. It’s a package.json, or tsconfig. Not much else is required.
Jump into deno and you don’t even need those. Feels like JS is one of the better languages in this regard
It's a noticeably-more-concise-and-flexible option than what came before it, and the net result has probably been good. But it's next to impossible to understand and troubleshoot when things don't go perfectly. And similarly difficult to figure out what you need to do to achieve X, because who knows, it could be in any of thousands of locations, called anything, and there's not enough structure to let you infer what's reasonable and what isn't.
We're still not as bad as mobile development, but things are getting worse.
I swore off massive impenetrable abstractions like React eons ago and I'm saner for it. Abstractions are supposed to remove complexity, not treat users like morons.
The fact that there are debugging tools at the React level does reduce complexity. It means you don't need to understand the details of how React is implemented. You can just think in terms of React concepts (which are quite straightforward and have excellent documentation).
I am not really sure I understand this stance. My JavaScript ultimately just boils down to C++, no? C++ ultimately just boils down to machine code, right? Yet you clearly wouldn't try to tackle those things at that layer.
Without more detail on what this bug was, why there were no error messages, or why it was so hard to recreate that you felt you could only look at it through a debugger, I completely fail to understand your point.
> Abstractions are supposed to remove complexity
React does remove a ton of complexity from writing frontend - good luck composing HTML & JS components with just jQuery - but in exchange it also introduces some, of course. So they make that new and different complexity easier to handle with their dev tools.
> I swore off abstractions ... [that] treat users like morons
I really have a tough time reading this charitably. What on earth are you on about?
Really curious what you mean by that.
There are two possible Chrome (not React) extensions for dev tools that I can think of — one is the "official" extension by the React core team that helps visualise components better and also helps understand why they may have rerendered. The other is a redux extension that's very helpful for inspecting all the actions that were dispatched against the store and the changes they caused in the redux state. Neither sounds like what you are describing. Bugs with React state should be easy to diagnose with just a handful of console logs.
I don't disagree that it's pretty obtuse, despite loving working in it. I miss being able to look up the source of Backbone.js and gain a real understanding of why something was happening. Maybe I'll get to that point in React, but it seems much harder to grasp what the source is doing.
When you hit your first list of elements and need to rerender the contents of just one of the elements, you will begin writing your own React.
If you try to implement some kind of "two way databinding", you will first create React, then create Angular.
However, unreadable and disorganised code can be written at any abstraction level. I have found React projects to be particularly hideous due to the fixation engineers have for tools like Redux.
Angular tends to be a bit better because it's got some decent software design principles built in - but you know front end engineers. Any project with ngrx is a write-off from a readability perspective.
Isomorphic applications are also a meme that kills readability.
I write most of my personal projects using Preact and some kind of reactivity helper (RIP `Object.observe`, I grieve for you).
The last time I used the extension, it was obnoxious to use to try to find "state" within the page. Using redux makes things easier to debug from this perspective but you do need yet another extension for that too...
I really think this is a sign of young engineers who have no experience with older approaches just memorizing "how things are done". They have no perspective on the tooling or experience.
I feel like that sometimes. Not all bad though. I've learned that being spartan with your approach not only frees the mind of clutter, but also keeps your stack from being a delicate shitshow of obscure tools 3-4 years down the line.
Back then the comparison was made with rails, but it still holds true with modern web development now more than ever.
I feel like I went through exponentially more faffing about setting up the web server and forwarding to the accompanying Go server on DigitalOcean than I did getting my project running in modern browsers just using tsc.
What I've pinned it to is a cultural problem in the JavaScript realm: everybody wants to impress everybody else about how smart they are, and in the process of doing so, creates convoluted mazes. As a side-effect, motivation dips and then quality control falls in concert, leading to an "at least it works" mentality.
That's my curmudgeon speculation/observation, at least.
Compare it to the hardware level. The technology required to build a modern CPU is fucking amazing compared to things like being aware of cache coherency when writing code. The amount of time and experimentation to regain the knowledge required for building the full supply chain of a modern CPU would be monumental compared to learning to go from something like JS to C.
It’s even easier than that though.
Open chrome, type “how to write computer programs” and go from there.
Sure, reading the two inch thick book on programming your TI-83+ was informative, but God help you if you got stuck.
Now we've got Scratch (visual programming), Arduino (push-button C compilation!), and Stack Overflow full of the answer to almost any problem or stack trace you encounter.
I don't have to read a tome for hours or wait a day or two for a response on a creaky BBS. Much easier.
  npm init
  npm i --save-dev typescript

Add "build": "tsc" to your package.json "scripts", and add a tsconfig.json file. Then:

  npm run build

Finito - you've compiled a TS project to JS.
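For reference, a minimal tsconfig.json for that setup might look something like this (the exact options are up to you; this is just a plausible starting point, not the only valid config):

```json
{
  "compilerOptions": {
    "target": "ES2019",
    "module": "commonjs",
    "strict": true,
    "outDir": "dist"
  },
  "include": ["src"]
}
```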
If you have an existing JS project that's being converted to TS, you likely have bundling, linting, and CI handled already; for the most part, you just add `preset: typescript` to your existing Babel config and you're done.
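Concretely (as I recall from Babel 7 — double-check against your Babel version), that amounts to adding @babel/preset-typescript to your presets list:

```json
{
  "presets": ["@babel/preset-env", "@babel/preset-typescript"]
}
```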
It's from the author of Vue, and supports both Vue and React, including JSX and TS.
It has the qualities of Vue:
- it works out of the box
- setup and usage are super easy
- it's wicked fast
- documentation rocks
I mentioned it to one of my clients 3 days ago. They tried it for 5 minutes and immediately decided to spend a day migrating their project.
It's that good.
And the migration took only 2 hours.
I quite like how React Native does it. They have git diffs of a created project for each released version. So if you want to migrate from version A to B you just see the diff and apply those changes yourself.
Not ideal but it really helps when you want a good mix of configurability while avoiding incompatible divergence.
Also, parcel Just Works for a lot of problems.
If there is a Node-specific library, i.e. a lib that uses the require global, the fs module, etc., then you can, in many cases, use the Node compat module from the Deno std lib and just drop it into your project.
That being said, there's of course a Deno-specific ecosystem of libs, too, i.e. libs that use the Deno global. So you might find a suitable replacement if your favorite Node lib is not compatible for some reason.
After working with node.js for more than eight years I decided to give deno a try (https://github.com/bermi/genetic) and I’ve been gratefully surprised by the development experience.
I’m just missing a simple way to include deno code in my existing node.js and browser projects.
I'd say it has lower friction in general since Deno is using Web APIs, e.g. the fetch API is built-in (see https://deno.land/manual@v1.9.1/runtime/web_platform_apis).
One of the good things about Deno is that it has a golang-like opinionated/standardized workflow.
My preferred approach is to let thy primary modules be thy commands. That way everything is tidy, documented, and clear for your users. Example:
// module commandName parses the command, exclusion list, and options out of the other arguments in process.argv
import commandName from "./lib/terminal/utilities/commandName.js";
// module commandList is the list of modules that serve as commands for your application
import commandList from "./lib/terminal/utilities/commandList.js";
// module commands_documentation provides documentation to the terminal about your commands and each of their options
import commands_documentation from "./lib/terminal/utilities/commands_documentation.js";
// module vars is a generic configuration store available across the application
import vars from "./lib/terminal/utilities/vars.js";
(function terminal_init_execute():void {
    // command documentation
    vars.commands = commands_documentation;
    // supported command name
    vars.command = commandName() as commands;
    commandList[vars.command]();
}());
What it looks like on the terminal:

  node myApplication myCommand

The fact that it could be either bash or PowerShell is a primary reason why this tooling is there. Bash scripts work great (I use them) if everyone coding on the project shares an operating system. Not so much once you cross that boundary.
To give a sense of the lengths the JS community goes to address these compatibility issues, consider that Yarn 2 actually implements its own shell for running `package.json` scripts in a cross-platform way.
But I see having it all written in JS is still simpler.
...
Compiling swc_ecma_transforms v0.45.3
error[E0004]: non-exhaustive patterns: `MaxFilesWatch` not covered
--> /Users/weston/.cargo/registry/src/github.com-1ecc6299db9ec823/deno_runtime-0.11.0/errors.rs:75:9
   |
75 |         match error.kind {
   |               ^^^^^^^^^^ pattern `MaxFilesWatch` not covered

  brew install deno  # works with no issues and much faster
Though, at the bottom of https://deno.land/#installation, one sees the incantation:
cargo install deno --locked
I removed brew's deno, installed rust (via brew) and then tried this. No compile errors. And swc_ecma_transforms compiled successfully.
...
Compiling swc_ecma_transforms_proposal v0.13.1
Compiling swc_ecma_transforms_optimization v0.15.3
Compiling swc_ecma_transforms_typescript v0.14.1
Compiling swc_ecma_transforms_react v0.14.1
Compiling swc_ecma_transforms v0.45.1
Compiling gfx-auxil v0.8.0
...
Installing /Users/i034796/.cargo/bin/deno
Installing /Users/i034796/.cargo/bin/denort
Installed package `deno v1.9.0` (executables `deno`, `denort`)

  % nix-shell -p deno
these paths will be fetched (18.39 MiB download, 57.83 MiB unpacked):
/nix/store/vr6lw60nav6kd0qjkb61lbb78mpx4pry-deno-1.8.2
copying path '/nix/store/vr6lw60nav6kd0qjkb61lbb78mpx4pry-deno-1.8.2' from 'https://cache.nixos.org'...
[nix-shell:~]$ deno --version
deno 1.8.2 (release, x86_64-apple-darwin)
v8 9.0.257.3
typescript 4.2.2

* It hasn't been updated for six months
* It doesn't appear to support TypeScript 4 properly[1]
Obviously, YMMV but just wanted to give a +1 to a tool that made my life a little bit easier recently!
In my case, that was not my requirement, and I started with tsdx and regretted it. It's way too much for an early project -- you should really add these things as you find out that you need them. In my case the main issue was that I was using it in a monorepo, and having duplicated tooling and watch scripts in each library was not great for memory usage, build times, or hot reload times. (In fact, even by itself, tsdx had some really bad build times compared to compiling the code as part of a Next.js build.)
In terms of monorepo, FWIW I'm now using .tsconfig project references [0], Yarn Berry workspaces, next.js externalDir experimental flag, judicious usage of `extends:`, and a shared package with common development scripts. I'm fairly happy with the setup now, but it was a huge PITA to get there, and a lot of the features only became available in the last year. But I'm relieved to be relying on officially supported TypeScript features, operating under the assumption that the TS team is incentivized to keep improving build times for this use case (which, in fact, they use in their own repo).
I've also been eyeing Rush [1] for monorepo management but haven't pulled the trigger yet. I think they are making all the right decisions. The real challenge with a monorepo, and one I haven't fully solved yet, is balancing the tradeoffs of "passive compilation" (importing from the source of sibling packages) vs. bundling each package separately. As a small team, passive compilation is somewhat okay, and tsconfig project references kind of enforce boundaries, but on a larger team it could become problematic. On the other hand, bundling each package separately is not a great dev experience when each bundler is eating 1gb of RAM and running its own hot reload process.
[0] https://www.typescriptlang.org/docs/handbook/project-referen...
I am probably not the target demographic for tsdx or other bootstrap libraries in the ecosystem but I would love to have something "simple" with good defaults which I can debug if something breaks to make CLI scripts to familiarise myself with the language and ecosystem.
Currently the best option for me is to manually set up the directory structure and build commands, which is not fun for mostly simple throwaway code.
golang and rust seem to have things right in this regard though rust build artifacts can get quite big.
Nice step by step, explaining what each tool does and how things work together.
I’m still leaning towards just learning Elm and giving up on the JavaScript ecosystem, but an article like this gives me a little hope that at least it seems realistic to get started with TypeScript.
Hot reloading for both server and browser is what's important.
And by being inflexible, this is one of the reasons there are a bazillion tools and steps required just to have a basic project setup.
They're not using the Unix tools anyway. It's not like they're using entr and make to watch and build, but let's reinvent the wheel and use nodemon and gulp/bower/webpack/esbuild/snowpack/...
Maybe overkill, but it's saved me from having to remember which recent project to reference to remember exactly how I like eslint set up, etc. Been a very smooth experience!
Does anyone here have some experience with ReScript (or ReasonML, for that matter) to share with the rest of us?
ReScript seems to have come a long way since I last looked at it (it was just called ReasonML at the time I think), so it'd be worth looking into.
Though I admit I've lately moved towards the even more opinionated Prettier rather than ESLint.
[1] https://www.npmjs.com/package/eslint-config-standard-with-ty...
Mostly helpful to keep a shared ruleset across projects.
We've hooked up "eslint --fix" and "prettier --write" to "npm run fix".
One command before committing and you comply with all our coding rules. Easy, peasy. We tie the combined checks into "npm run check", and run this in CI/CD, so we know only compliant code is merged. Check!
Most other lang tools _won't_ do this. Take PMD or Checkstyle for Java. They'll just fail your build, but won't make your code compliant.
Finally, if you decide you want to add a new code convention to your repos, it's just a one liner: "npm run fix". It doesn't get much easier.
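As a sketch, the wiring for that might look something like this in package.json (script contents are illustrative; adjust paths and flags to taste):

```json
{
  "scripts": {
    "fix": "eslint --fix . && prettier --write .",
    "check": "eslint . && prettier --check ."
  }
}
```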
No floating promises, no bad templating, etc.
Generally, linting covers things that are valid in the language, but still better done otherwise.
It looks like esbuild doesn’t have websocket auto-reload magic, so you will need to refresh the web browser to see the recompiled changes.
Maybe I am misunderstanding but this feels like it is provided via the `--bundle` flag in esbuild [1].
> It looks like esbuild doesn’t have websocket auto-reload magic, so you will need to refresh the web browser to see the recompiled changes.
I have also noticed that it does not seem to have the auto-reload part [2]. It does however have a server so that is at least something (e.g., `--servedir=dist`). Though that seems like a very new development.
[1] https://esbuild.github.io/getting-started/#your-first-bundle
[2] https://github.com/evanw/esbuild/issues/802
using absolute language to accuse people of not giving the benefit of the doubt
The complexity/required reading that comes with integrating ~5+ individual tools and libraries across one language can be reduced a lot by using make to do the plumbing between these tools.
Often I find that I need to do some of the following:
- do just a tiny bit of pre-processing
- use two disparate tools which might interact/have an ordering requirement
- alias a simple command for a repetitive task
And it's the case that:
- The tools/libraries I'm using don't have the functionality built in
- I'm not excited about writing a bash script
- A <programming language> script/build-hook feels too heavy
I find that Makefiles are a really good way to boil down and standardize my builds across projects.
Here's a chunk from a somewhat recent project:
  psql: export DB_CONTAINER_NAME=$(shell $(KUBECTL) get pods -n $(K8S_NAMESPACE) -l component=db -o=name | head -n 1)
  psql:
  	$(KUBECTL) exec -it $(DB_CONTAINER_NAME) -n $(K8S_NAMESPACE) psql -- --user $(DB_USER)
And another that's a bit less trendy:

  image: check-tool-docker
  	$(DOCKER) build \
  		-f infra/docker/Dockerfile \
  		-t ${IMAGE_FULL_NAME_SHA} \
  		.

  image-publish: check-tool-docker
  	$(DOCKER) push ${IMAGE_FULL_NAME_SHA}

  image-release:
  	$(DOCKER) tag $(IMAGE_FULL_NAME_SHA) $(IMAGE_FULL_NAME)
  	$(DOCKER) push $(IMAGE_FULL_NAME)
And earlier in the file is the actual language-specific stuff:

  lint: check-tool-yarn
  	$(YARN) lint
Yeah, all I'm doing is just calling through to yarn here, but the nice thing is that `lint` is a pretty common target from project to project, so in most places `make lint` will do what I think it's going to do.

And if you're wondering what the check-tool-<x> targets are like:

  check-tool-yarn:
  ifeq (,$(shell which $(YARN)))
  	$(error "yarn not installed (see https://yarnpkg.com)")
  endif
I will warn that there are people with Makefile PTSD, but I am finding a lot of value in it these days and would encourage people to take a look. If you're really ready for some fun, check out recursive make -- managing and orchestrating subprojects is really, really easy (and you don't have to care what the sub-project is written in at the top level).

  make psql user=my_name  # ?
To me, having to type `user=` is very annoying, I want to do just `make psql my_name`.

Agreed that Make is nice :- ) My Makefile is not as nice as Make though o.O
> I'm not excited about writing a bash script
Bash scripts should almost be illegal :- ) I'm thinking about writing scripts in Deno in the future, instead
I also like Python for this but it's an absolute PITA to distribute scripts that support lots of different versions of python or need any dependencies not in your distro's repos.
$ USER=my_name make psql
The above example is the creation of a temporary ENV variable (assuming your shell supports this feature; it's equivalent to doing an export, but only for that line), and the USER variable (accessed by using $(USER) or ${USER} in your Makefile somewhere) will be automatically populated.

That is distinct from the following:
$ make psql USER=my_name
What you've done there is actually make-specific: you're giving Make a Make variable, so Make will know about $(USER)/${USER}, and it will also show up in the recipe, because Make exposes its variables to the shells (each line is its own shell) that it runs[0]. So this Makefile:

  .PHONY: my-tgt
  my-tgt:
  	echo -e "double dollar to print a bash var => [$$MYVAR]"
Produces the same thing -- $$MYVAR is actually a way to use a shell variable (the $ has to be escaped), and you get the same result putting the variable before or after:

  $ MYVAR=from-the-shell make my-tgt
  double dollar to print a bash var => [from-the-shell]

  $ make my-tgt MYVAR=from-make
  double dollar to print a bash var => [from-make]
All that said, if you don't ever need to change the name, then just make the target `psql-<username>` or `psql_username`, problem solved! For example, I use kubie on my kubernetes cluster, and I have explicitly named clusters, so when I want to enter a kubie context (just so I can avoid some typing), I run `make <cluster name>-kubie` and I'm off -- I didn't feel the need to do `make kubie CLUSTER=<cluster name>`, though I easily could have. Here's what it actually looks like:

  mycluster-kubie:
  	CLUSTER=mycluster $(KUBIE) ctx -f secrets/ansible/the.dns.name.of.my.server/var/lib/k0s/pki/admin.conf
I don't strictly need that CLUSTER=mycluster there, but if the command cared, then it could be easily "hard coded".

> Bash scripts should almost be illegal :- ) I'm thinking about writing scripts in Deno in the future, instead
Bash is super powerful, and some people are good at it, but it just hasn't stuck enough for me to be comfortable. Plus, it still feels like another language to be wrangled while Make's only purpose is to do the things (whatever that thing is) that build your software.
[EDIT] figured I might share some more nice pieces of software that I use and love (also so you don't think I'm just willy nilly checking in secrets to my repository):
- direnv[1] for managing local ENV settings in a given folder (like your project folder)
- git-crypt[2] for simple encryption of secrets in a git repository, with gpg support
- entr[3] it re-runs a given command when files change, really good for reducing feedback loops when working with tooling that doesn't have --watch/fsnotify functionality
[0]: https://www.gnu.org/software/make/manual/html_node/Environme...
[1]: https://direnv.net/docs/hook.html
Just because someone wants to control a UI element to the nth pixel doesn't mean we should let them do that. At this point I'm thinking it might use less bandwidth, be less complex, and be easier to maintain to just present a web page as a giant imagemap. SVG, so it scales. With screen-ratio-based media queries rather than guessing DPI based on inaccurate factors like the number of pixels.
SVG is XML, so you can do exactly that if you want to. Hook React up to it and go nuts. You can have the browser build a DOM tree out of it for you and use many of the same APIs you would on HTML, if you don't want to use React. Seriously, this is a thing you can actually do.
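For what it's worth, the "page as scalable imagemap" idea is only a few lines of markup (purely illustrative; the link target and sizes are made up):

```html
<!-- An SVG "imagemap" page: scalable regions, clickable links,
     no layout-engine guesswork. -->
<svg viewBox="0 0 800 600" xmlns="http://www.w3.org/2000/svg">
  <a href="/about">
    <rect x="20" y="20" width="200" height="60" fill="#ddd"/>
    <text x="40" y="58" font-size="24">About</text>
  </a>
</svg>
```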