Documenting endpoints one by one sucks. This project grew out of our own need for it at past jobs, building third-party integrations.
It acts as a local proxy server that listens to your application's HTTP traffic and automatically translates it into OpenAPI 3.0 specs, documenting endpoints, requests, and responses with minimal effort.
Installation is straightforward with NPM, and starting the server only requires a few command-line arguments to specify how and where you want your documentation generated, e.g.:

    npx autospec --portTo PORT --portFrom PORT --filePath openapi.json
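Under the hood it's essentially a recording reverse proxy. Here's a minimal Go sketch of the idea (illustrative only, not our actual implementation; assume --portTo is the port your app already runs on and --portFrom is the port clients hit):

package main

import (
    "log"
    "net/http"
    "net/http/httputil"
    "net/url"
)

func main() {
    // Your app, already running on the --portTo side.
    target, err := url.Parse("http://localhost:3000")
    if err != nil {
        log.Fatal(err)
    }
    proxy := httputil.NewSingleHostReverseProxy(target)
    // Every request/response pair passes through here; a spec generator
    // records methods, paths, params, and body shapes at this point.
    proxy.ModifyResponse = func(resp *http.Response) error {
        log.Printf("%s %s -> %d", resp.Request.Method, resp.Request.URL.Path, resp.StatusCode)
        return nil
    }
    // Clients connect here instead of to the app (the --portFrom side).
    log.Fatal(http.ListenAndServe(":8080", proxy))
}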
It's designed to work with any local website or application without extensive configuration or interference with your existing code, making it flexible across frameworks. We first tried capturing network traffic with a Chrome extension, but that didn't give us the full picture of backend and frontend interactions.
In future updates we aim to introduce features like HTTPS and OpenAPI 3.1 specification support.
For more details and to get started, visit our GitHub page (https://github.com/Adawg4/openapi-autospec). We also have a Discord community (https://discord.com/invite/CRnxg7uduH) for support and discussions around using OpenAPI AutoSpec effectively.
We're excited to hear what you all think!
It is like the TDD approach: design before build.
Writing or generating tests after you build the code is the same kind of guessing: you infer what the system should do from what it happens to do. The OpenAPI specification and the tests should tell you what it should do, not the code.
If you have the specification, everyone (including AI) can write the code for you to make it work. But the specification is about what you think the system should do. Those are the questions and requirements that you have about the system.
In that initial implementation period, it's more time-consuming to have to update a spec nobody uses. Maintaining specs separately from your actual code is also a great way to get into situations where your map != your territory.
I'd instead say: support and use API frameworks that can automatically generate OpenAPI specs, or make a lot of noise to get the frameworks that can't to add that feature. Don't try to maintain OpenAPI specs without automation :)
I'm even writing a side project now where I'm defining the API using OpenAPI and then running a generator for the echo Go framework to generate the actual API endpoints. It takes just a few minutes to create a new API.
The point is twofold: you test your API immediately AND you get a ton of boilerplate generated.
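To give a flavor of the generated side: a minimal sketch, assuming a generator in the style of oapi-codegen targeting echo (the interface shape is illustrative, not exact generator output):

package api

import (
    "net/http"

    "github.com/labstack/echo/v4"
)

type Person struct {
    ID   int    `json:"id"`
    Name string `json:"name"`
}

type DB interface {
    GetPerson(id int) (Person, error)
}

// Emitted by the generator from the OpenAPI spec:
type ServerInterface interface {
    // GET /people/{personId}
    GetPerson(ctx echo.Context, personId int) error
}

// The only hand-written part; routing and parameter parsing come from
// the generated glue code.
type server struct{ db DB }

func (s server) GetPerson(ctx echo.Context, personId int) error {
    person, err := s.db.GetPerson(personId)
    if err != nil {
        return echo.NewHTTPError(http.StatusNotFound)
    }
    return ctx.JSON(http.StatusOK, person)
}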
So many products out there just feel like a bunch of separate things with a spec slapped on top. Sometimes the spec doesn't even make sense: for example, the same property having a different type across different endpoints.
Save yourself time and do it right from the get-go.
> Maintaining specs separately from your actual code is also a great way to get into situations where your map != your territory.
So yeah, write your spec once and generate all servers and clients from it…
OpenAPI spec seems intended to be consumed, not written. It's a great way to convey what your API does, but is pretty awful to write from scratch.
I do wish there were a simpler language to write it in, ideally JSON-based as well, that would allow this approach of writing the spec first. But alas, there is not, and I have looked a loooot. If anyone has suggestions for other spec languages I'd love to learn!
I've been using a tool to generate OpenAPI from code, and am pretty happy with that workflow. Even when writing the API before the logic, I'd much rather write the types and endpoints in a real programming language, and just have a `todo` in the body.
You can still write API-driven code without literally writing OpenAPI first.
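For instance, a sketch of that in Go (the annotations are in the style of swaggo/swag, one assumed example of a from-code generator):

package api

import "github.com/labstack/echo/v4"

type Person struct {
    ID   int    `json:"id"`
    Name string `json:"name"`
}

type CreatePersonRequest struct {
    Name string `json:"name"`
}

// CreatePerson is fully typed and routable before any logic exists, so a
// from-code generator can already emit the OpenAPI for it.
// @Summary Create a person
// @Param   request body CreatePersonRequest true "person to create"
// @Success 201 {object} Person
// @Router  /people [post]
func CreatePerson(c echo.Context) error {
    panic("todo: implement once the API shape is agreed")
}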
Sadly, there is a distinct lack of tools to make spec-first development easier. At the moment, Stoplight [0] is the only game in town as a high-quality schema editor, but it requires payment for any more significant usage.
But as some comments below point out, an OpenAPI spec is a pain to create manually, which is why TypeSpec from Microsoft is such a great tool. It lets you focus on the important bits of creating a solid API (model, consistency, best practices) in an easy-to-use DSL that spits out a fully documented OpenAPI spec to build against. See https://typespec.io/
Generating OpenAPI spec from the server code has always felt significantly better for me.
Example: I used to work at a place that had a massive PHP monolith, developed by hundreds of devs over the course of a decade, and it was the worst pile of hacky spaghetti code I’ve ever seen. Unsurprisingly, it had no API spec. We were later doing tonnes of work to clean it up, which included plans to add an API spec, and switch to a spec-first design process (which we were already doing in services split from the monolith), but there was a massive existing surface area to spec out first. A tool like this would’ve been useful to get a first draft of the API spec up and running quickly for this huge legacy backend.
Incoming request params became validated and cast object properties; outgoing response params were validated and cast according to the spec.
In the end I think it worked really well, and I loved not needing to maintain the spec separately. The annoying bits were adjusting the library when the spec changed, and some gnarly parts of the spec that weren't easy to implement logically.
At any rate, it also made for a similar experience of having to consider the client's perspective while writing/maintaining the API.
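(If you'd rather not build that library yourself, the same idea can be sketched in Go with the kin-openapi library; the calls below are from memory, so treat them as approximate.)

package main

import (
    "context"
    "log"
    "net/http"

    "github.com/getkin/kin-openapi/openapi3"
    "github.com/getkin/kin-openapi/openapi3filter"
    "github.com/getkin/kin-openapi/routers/gorillamux"
)

func main() {
    // Load the spec once; every incoming request is checked against it.
    doc, err := openapi3.NewLoader().LoadFromFile("openapi.json")
    if err != nil {
        log.Fatal(err)
    }
    router, err := gorillamux.NewRouter(doc)
    if err != nil {
        log.Fatal(err)
    }
    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        route, pathParams, err := router.FindRoute(r)
        if err != nil {
            http.Error(w, "not in spec", http.StatusNotFound)
            return
        }
        err = openapi3filter.ValidateRequest(context.Background(), &openapi3filter.RequestValidationInput{
            Request:    r,
            PathParams: pathParams,
            Route:      route,
        })
        if err != nil {
            http.Error(w, err.Error(), http.StatusBadRequest)
            return
        }
        // ...dispatch to the real handler with spec-validated params...
    })
    log.Fatal(http.ListenAndServe(":8080", nil))
}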
@Path("/people")
public class PeopleApi {
@Path("{personId}")
@GET
public Person getPerson(@PathParam("personId") int personId) {
return db.getPerson(personId);
}
}
It's easy to generate a spec for a JAX-RS class because it has the paths, parameters, types, etc. right there. There's a GET at /people/{personId} which returns a Person and takes a path parameter personId which is an integer.

If we're talking about a Go handler which doesn't have that information easily accessible, I understand wanting to start with a spec:
func GetPerson(w http.ResponseWriter, r *http.Request) {
    // with plain net/http, the id has to be pulled out of the path by hand
    personId, _ := strconv.Atoi(strings.TrimPrefix(r.URL.Path, "/people/"))
    person := db.GetPerson(personId)
    data, _ := json.Marshal(person)
    w.Write(data)
}
func GetPerson(c echo.Context) error { // or with something like Echo/Gin
    id, _ := strconv.Atoi(c.Param("id"))
    person := db.GetPerson(id)
    return c.JSON(http.StatusOK, person)
}
In Go's case, there's nothing which can tell me what the method takes as input without being able to reason about the whole method. With JAX-RS, it's easy to reflect on the method signature and see what it takes as input and what it gives back, but that's not available with Go (with the Go tools that most people are using).

This isn't meant as a Go/Java debate, but more a question of whether some languages/frameworks basically already give you the spec you need, to the point where you can easily generate an OpenAPI spec from the method definition. Part of that is that the language has types, and part of it is the way JAX-RS does things, such that the things you're grabbing from the request become method parameters before the method is called, rather than the method just taking a request object.
JAX-RS makes you define what you want to send and what you want to receive in the method signature. I totally agree that people should start with thinking about what they want from an API, what to send, and what to receive. But is starting with OpenAPI something that would be making up for languages/frameworks that don't do development in that way naturally?
----------
Just to show I'm not picking on Go, I'm pretty sure one could create a Go framework more like this, I just haven't seen it:
type GetPersonRequest struct {
    Request  `path:"/people/{personId}"`
    PersonId int `param:"path"`
}

func GetPerson(personRequest GetPersonRequest) Person {
    return db.GetPerson(personRequest.PersonId)
}
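Those struct tags are exactly the kind of thing reflection can pick up; a quick sketch, reusing the hypothetical GetPersonRequest above:

import (
    "fmt"
    "reflect"
)

// A framework could recover the routing and type info purely via
// reflection on the request struct (tag names here are made up):
func describeEndpoint() {
    t := reflect.TypeOf(GetPersonRequest{})
    route, _ := t.FieldByName("Request")
    fmt.Println(route.Tag.Get("path")) // "/people/{personId}"
    param, _ := t.FieldByName("PersonId")
    fmt.Println(param.Tag.Get("param"), param.Type.Kind()) // "path" int
}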
I think you'd have to have the request object because Go can annotate struct fields (with struct tags), but can't annotate method parameters or functions (but I could be wrong). The point is that most languages/frameworks don't have the spec information in the code in a way that's easy to reflect on, like JAX-RS, ASP.NET APIs, and some others do.

Looking for the handler for `GET /foo/{fooID}/bar` is terrible in a codebase using annotations.
- examples, they are a pain to write in Java annotations.
- multiple responses, ok, invalid id, not found, etc.
- good descriptions, you can write descriptions in annotations (particularly post Java 14) but they are overly verbose.
- validations, you can use bean validation, but if you implement the logic in code it's not easy to add that to the generated spec.
See for example this from springfox https://github.com/springfox/springfox/blob/master/springfox...
It's overly verbose and the generated OpenAPI spec is not very good.
I think I like this blunt elevator pitch much better than OP's multiple paragraphs of text...
For example, clicking a link which loads some data, then clicking edit (which isn't even an anchor), typing in & clicking stuff, then clicking the save button (don't click the cancel button!) would not be an interaction that would get picked up with your suggestion. Detecting loops becomes much more ambiguous and backtracking to get all the permutations of interactions becomes a whole other problem to solve.
You would also have to fill and submit forms with valid and invalid data. You would have to toggle checkboxes, change radio buttons, click buttons, (e.g. "Apply filters" after changing values in a product filter section), and generally go through many combinations of inputs to find all valid parameters and possible responses.
1. Wouldn't this also be helpful in understanding the exact nature of all traffic/calls against a particular page, or a user workflow moving through your site, from a UX perspective?
2. Could one make a proxy from this on a local home egress such that you could see the nature of outbound network traffic to sites you visit (more importantly, traffic heading to 3rd-party trackers/cookies' APIs via your site visits)?
3. Could it be used to nefariously map open API endpoints against a system one is (whiteHat) pen testing?
I was looking into how this works for inspiration, and it seems like the work of inferring the OpenAPI definition from recorded requests/responses is handled by the har-to-openapi nodejs library [2]. Is this by the same team? If not, kudos for packaging this up in a proxy -- seems like a useful interface into that library.
https://news.ycombinator.com/item?id=38012032
(5 months ago)
things to consider:

- junk data from users will show up; unless your downstream service rejects extra params, users will mess with you.
- it documents each endpoint, but it's harder to say whether this endpoint's "user" data is the same as another endpoint's "user".
- it's hard to tell if users are hitting all endpoint inputs/outputs without manual review.
(Although I'd be curious to see something very similar to this running in prod and generating WAF rules and/or alerting on suspicious requests. Kinda like Dynatrace or Splunk, but much more aware of the API documentation and expectations.)
> By watching API traffic live and automatically inferring endpoint structure, Akita bypasses the complexity of implementing and maintaining monitoring and observability.
> […]
> - Export your API description as an OpenAPI spec.
(Not affiliated, nor am I a user of either of these.)
I generated some specs from that!
I ran into trouble keeping them up to date.
I find that the real shortage of tools exists going the other way: from OpenAPI to code. The ecosystem has proven to be a huge disappointment there, comprising a janky collection of defective tools that still (years later) don't support version 3.1 (a critical update).